"The Diffusion Pilot" - short film


I've graduated! As of June 2024 I hold a fresh Master's degree in animation, and I now also have a short film created with the techniques described in this blog. This is a quick update: a brief introduction to how the film came to be, and some words to elaborate on it.

Change of plans

My short graduation film is alluded to or mentioned explicitly several times throughout this blog up until this point. However, the reality is that I did not end up making the film I set out to make, and as this change of plans approached, I was sneakily editing the wording about it in my earlier blog posts.

Fighting a losing battle against time, like most animators, I pushed the initial ambitious vision aside and instead improvised a quick demo of my WIP techniques to show in time for graduation. To my own surprise, the final result surpassed this humble plan and stands on its own as a meaningful piece in my creative career. From countless experiments cluttering my computer's work drive and my own head, I was compelled to distill confident ideas and structure, which led me to narrate a contemplative journey of "piloting" generative image AI in a 7-minute animated short film, which is now travelling across festivals, starting with Dok Leipzig!

"The engine has started up" - getting absorbed in my AI animation lab

"Diffusion pilot" began as an effort to develop a workflow and tools for a specific animated narrative film I wanted to make, using intricate, layered digital techniques involving 3D, digital painting, and frame-by-frame generative image AI. Since then, it has grown far past its initial scope, becoming an internet blog, a methodology, an evolving toolkit. All of this was an attempt to intertwine the art of animation with modern generative image AI into a meaningful and unique craft; I hoped to escape the inherent mediocrity of "AI slop" by experimenting towards something genuinely compelling. There was no longer one without the other, and I switched gears to think about generative image AI as an animation medium in its own right, which is especially reflected in my latest essay.

I became more and more invested in prototyping animation techniques and interfaces for them, inspired by animation tradition and established software, but with generative image AI as an integral part.

Screenshot of the main WIP panel for the diffusion pilot, prototyped inside TouchDesigner. As in established animation software, a frame timeline with parameter graphs sits at the bottom, and a small toolbar is present on the left (with only 3 tools so far). One of the tools is a simple brush!

My own terminology for it was also evolving, and the "engine" was what I called the heart of my prototype toolkit, the component that was tirelessly "propelling" the whole system frame-by-frame forward through time, generating and regenerating image after image using Stable Diffusion. This notion stuck with me and made its way into the film as an opener to its narration and an element of playful analogies.
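The "engine" can be pictured as a simple recurrence: each frame is regenerated from the one before it, propelling the sequence forward through time. Below is a minimal, purely hypothetical Python sketch of that loop; `diffusion_step` merely stands in for a real Stable Diffusion img2img call, and none of these names come from the actual TouchDesigner toolkit.

```python
# Hypothetical sketch of a frame-by-frame "engine" loop.
# `diffusion_step` is a placeholder for a Stable Diffusion img2img call
# that would re-generate each new frame from the previous one.

def diffusion_step(prev_frame: str, prompt: str, strength: float = 0.5) -> str:
    """Placeholder for an img2img generation call.

    A real engine would feed `prev_frame` into Stable Diffusion, with
    `strength` controlling how far the new frame may drift from it.
    """
    return f"{prev_frame} -> ({prompt} @ {strength})"

def run_engine(seed_frame: str, prompt: str, num_frames: int) -> list[str]:
    """Propel the system forward in time, frame by frame."""
    frames = [seed_frame]
    for _ in range(num_frames):
        # Each generation is conditioned on the frame that came before it,
        # so the animation accumulates change over time.
        frames.append(diffusion_step(frames[-1], prompt))
    return frames

frames = run_engine("seed", "a contemplative journey", num_frames=3)
```

The essential point is the feedback: the output of one generation becomes the input of the next, which is what makes the system feel like something being "piloted" rather than a batch of independent images.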

"Should we make infinite film?" - provocation through narration

I recorded my own voice for the film's narration. This is not something I ever thought I would be doing in my creative career, but I hoped it would contextualize the film's surreal and jarring animated sequences, making them more tangible and conceptually provocative. And while some sequences, intentionally or not, look like the stereotypical "AI animation" of around 2021-2023, hopefully the spoken context gives clues that they weren't made for the same reasons, or with the same surface-level methods.

A portion of "recordings" - animation snippets made with diffusion pilot through either aimless experimentation or methodical search.

Before deciding on narration, though, the rapid prototyping and development phase of the diffusion pilot project had left me with plenty of animation experiments and test material, which, viewed as a whole, often resembled some sort of journey, likely inspired by my recurring notion of piloting. In parallel, I found myself writing down endless ideas, questions, and emotional unrest about where AI is heading as a cultural and technological phenomenon, from which I carved a few concise lines of thought. These later became the context and narration for the film. With that, the route of the journey was clearing up, and the overall structure of the film took shape. The majority of shots were done after this structure had formed, with placeholder test material replaced by more curated, refined animated sequences that I made over the last two months.

Ideally, I hope the film is stimulating not only on a technical level, but also philosophically and emotionally. Despite my initial obsession with the technique and craft of animation, I was ultimately grappling with holistic ideas about art, human nature, and our feelings about them. Every viewer may recognize or imagine different symbols and threads of thought depending on their background, but I hope all realize that our view of "AI art" cannot be a binary conflict of opposing ideas; it demands nuanced views and deeper interrogation.

What's next

The Diffusion Pilot toolkit

At this point I'd love to open up the prototype tools I've been building in TouchDesigner to more people; however, they are still a rather duct-tape-and-hot-glue kind of contraption, one that was evolving not daily but hourly as I used it to produce my film.

Ideally, this would turn into a collaborative effort, where gaps in my knowledge and skills could be filled by people more competent in coding, optimizing, and stitching elements together across various domains. There are more than enough papers on specific, clever, and useful techniques revolving around generative image AI or 3D synthesis and manipulation, but turning them into seamless tools is quite overwhelming to me, even with the considerable help of efforts from individuals such as dotsimulate or olegchomp. While I am quite a technical artist, I am not a one-man developer army.

If I remain on my own, I could still polish these tools enough to at least run creative workshops with small groups of people. The capabilities of these tools are not necessarily revolutionary and can be found to some degree across existing apps and tools that utilize Stable Diffusion. But at the very least, I hope to give them a flavor that makes for a unique creative experience, especially for animators, as discussed in my essay on "Piloting".

In any case, expect a dedicated page on this blog in the future that documents the features of this toolkit in a systematic way as it evolves. Who knows, maybe I'm just burned out now, and with time it'll turn into one of those Patreon gigs for me, or become a repo on GitHub.

New films and animated pieces

I would still really, really like to make the film I initially set out to make, using generative AI and 3D techniques. In Q4 of 2022 I was working out a layout/animatic for it before falling down the AI rabbit hole. I am not making any concrete plans yet, but wish me luck. One option, for instance, is to make it an official production in Lithuania funded by state culture funds.

Experiments and miscellaneous output

Besides fully fledged films, I hope to share more snippets, experiments, b-sides, and pieces of animation. Plenty have accumulated already, and countless more are ready to be realized once I award them some of my time. I'll post them either here in blog posts, or perhaps try Zora as an alternative to Instagram.