
"The Diffusion Pilot" - short film



I've graduated! As of June 2024, I hold a fresh Master's degree in animation, and I now also have a short film created with the techniques described on this blog. This is a quick update for those following my creative progress, a brief introduction to how the film came to be, and some words to elaborate on it.

The Diffusion Pilot - film poster

Change of plans

The short graduation film is alluded to or mentioned explicitly several times throughout this blog up to this point. The reality, however, is that I did not end up making the film I set out to make, and as this change of plans approached, I was sneakily editing the wording about it in my earlier posts.

Fighting a losing battle against time, like most animators, I pushed the initial ambitious vision aside and instead improvised a quick demo of my techniques-in-development to show in time for graduation. To my own surprise, the final result surpassed this humble plan and stands on its own as a pivotal piece in my creative career. From the countless experiments cluttering my computer's work drive and my own head, I was compelled to distill confident ideas and a direction. That empowered me to narrate a reflective journey of "piloting" generative image AI in a short, 7-minute animated film, which will now be travelling across festivals.

"The engine has started up" - getting absorbed in my AI animation lab

The beginnings of "diffusion pilot" were about developing tools for a specific animated narrative film I wanted to make (namely with 3D-to-AI transformative techniques), yet it soon took on a life of its own, becoming an internet blog, a methodology, an evolving toolkit. All of that formed a multi-faceted marriage between the art of animation and modern generative image AI. There was no longer one without the other, and I switched gears to think about generative image AI as its own rich animation medium in a general sense, which is especially reflected in my latest essay.

I got more and more invested in prototyping animation techniques and interfaces for them, inspired by animation tradition and established software, but with generative image AI as an integral part.

Screenshot of the main WIP panel for the diffusion pilot, prototyped inside TouchDesigner. As in established animation software, a frame timeline together with parameter graphs sits at the bottom, and a small toolbar is present on the left (with only 3 tools so far). One of the tools is a simple brush!

My own terminology for it was also evolving, and the "engine" was what I called the heart of my prototype toolkit, the component that was tirelessly "propelling" the whole system frame-by-frame forward through time, generating and regenerating image after image using Stable Diffusion. This notion stuck with me and made its way into the film as an opener to its narration and an element of playful analogies.
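The "engine" idea can be sketched as a simple feedback loop: each generated frame becomes the input for the next generation. This is a minimal illustration of that loop only, not the author's actual TouchDesigner setup; the denoising step is injected as a function, and the diffusers-based `regenerate` shown in the comment is an assumption about one possible implementation.

```python
# Minimal sketch of the "engine" loop: propel the system frame-by-frame,
# feeding each generated image back in as the input for the next one.
# The actual generation step is passed in as `regenerate`, so the
# feedback structure itself stays visible.

def run_engine(first_frame, regenerate, num_frames):
    """Generate a sequence where each output becomes the next input."""
    frames = [first_frame]
    for i in range(num_frames - 1):
        frames.append(regenerate(frames[-1], frame_index=i + 1))
    return frames

# One hypothetical `regenerate`, using Hugging Face diffusers img2img
# with low strength so each frame drifts only slightly from the last:
#
#   from diffusers import StableDiffusionImg2ImgPipeline
#   pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5")
#   regenerate = lambda img, frame_index: pipe(
#       prompt="...", image=img, strength=0.35).images[0]
```

Keeping the loop separate from the model call is also what makes it easy to swap in different generators, or to "regenerate" only a range of frames, the way a timeline-based tool would.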

"Should we make infinite film?" - provocation through narration

I recorded my own voice for the narration of the film. This is a fact I would have found very perplexing and daunting a year ago, yet I find it kind of comforting now, because it frames the film's surreal and jarring animated sequences as more tangible and conceptually provocative. That's the idea, at least, but from the few comments and feedback that I've received, it seems to have worked!

A portion of "recordings" - animation snippets made with diffusion pilot, through either aimless experimentation or methodical search.

Before I decided on narration, though, the rapid prototyping and development phase of diffusion pilot had me gathering plenty of animation experiments and test material, which, viewed as a whole, often resembled some sort of journey, likely inspired by my recurring notion of piloting. In parallel, I found myself writing down endless profound ideas, questions, and emotional unrest about where AI is heading as a cultural and technological phenomenon, from which I carved a few concise lines of thought. These later became the context and narration for the film. With that, the route of the journey was clearing up, and the overall structure of the film had formed. The majority of shots were done after this structure was in place, with placeholder material from tests replaced by more curated, refined animated sequences that I made only over the last 2 months.

Ideally, I hope the film is stimulating not only technically, but also philosophically and emotionally, because despite my obsession with technique and craft during this process, it ultimately comes down to holistic ideas about art, human nature, and our feelings about them. Every viewer may recognize or imagine different symbols and threads of thought depending on their background, but I hope all realize that our view on "AI art" cannot be a binary conflict of opposing ideas, and that it demands nuanced views and deeper interrogation.

What's next

The Diffusion Pilot toolkit

At this point I'd love to open up the prototype tools I've been building in TouchDesigner to more people. However, it is still a rather duct-tape-and-hot-glue kind of contraption, one that was evolving not daily but hourly as I used it to produce my film.

Ideally, it would turn into a collaborative effort, where gaps in my knowledge and skills could be filled by people more competent in coding, optimizing, and stitching elements together across various domains. For instance, there are more than enough papers on specific, clever, and useful techniques around generative image AI or 3D synthesis and manipulation, but turning them into seamless tools is quite overwhelming for me, though the efforts of individuals such as dotsimulate or olegchomp help a lot. After all, I am a techie artist, highly efficient in TouchDesigner, Blender, and other artist-focused tools, but not quite a one-man developer army.

If I remain by myself, I could still polish these tools enough to at least run creative workshops with small groups of people. The capabilities of these tools are not quite revolutionary (yet) and can be found across existing apps and tools that utilize Stable Diffusion. At the very least, however, I hope to give them a flavor that makes for a unique creative experience, especially for animators, as discussed in my essay on "Piloting".

In any case, expect a dedicated page on this blog that documents the features of this toolkit in a systematic way as it evolves. Who knows, maybe I'm just burned out now, and with time it'll turn into one of those Patreon gigs for me, or become a repo on GitHub.

New films and animated pieces

I would still really, really like to make the film I initially set out to make using generative AI and 3D techniques. In Q4 of 2022 I was actually working on a layout/animatic type of thing for it, before falling into the AI rabbit hole. I am not making any concrete plans yet, but wish me luck. One option, for instance, is to make it an official production in Lithuania funded by state culture funds.

Experiments and miscellaneous output

Besides fully-fledged films, I hope to share more snippets, experiments, b-sides, and pieces of animation. Plenty have accumulated already, and endless more are ready to be realized once I award them some of my time. I'll be posting them either here in blog posts or on a dedicated page on Zora. Because I do not use Instagram, something like Zora seems like an interesting alternative way to share stuff online outside the bubble that is this blog. There is also X, but for me it's more of a dull, irritating necessity than a fun way to share my creative output and progress.

a fabricated fading memory on Zora