Mini-Essay / Reflection
This project evolved in a way I didn’t originally expect, and in many ways, the final result reflects both the possibilities and limitations of AI-driven creative tools. When I began working on this piece, my intention was to take photographs I had shot previously and animate them using Runway. The idea was to create smooth transitions that would bridge one photograph to another, like folding time, space, and perspective into a seamless visual flow. Runway has tools for frame interpolation that, in theory, could generate the in-between movement required to set still images in motion. I wanted to experiment with that — to feed it a first frame and a last frame, and let the model dream up the transition between them.
In my mind, the concept was cinematic and fluid. I imagined photographs dissolving into each other as if they were breathing, transforming, or remembering themselves differently. However, once I started actually working with Runway, I quickly realized that the workflow was not as smooth as I had hoped. Instead of creating seamless transitions, the model kept hard-cutting from frame to frame rather than generating convincing in-between states. The results looked abrupt — like a slideshow rather than animation. Every prompt adjustment, model tweak, and re-render brought me back to the same hard cut. The vision I had didn’t align with what the tool was giving me. That was a frustrating moment, and it forced me to rethink the project entirely.
Rather than abandon the material, I decided to shift my goal. I pivoted away from trying to animate the photographs and instead reframed the project as an exploration of Runway as a creative tool — what it can do easily, what it struggles with, and what its aesthetic tendencies are. If the tool wouldn’t give me smooth transitions, then maybe the uncertainty and roughness could become part of the concept rather than a failure of it. So I started testing things. I experimented with weather, with time of day, and with different color filters. I gave Runway variations of the same image and watched how it interpreted them. I tried landscapes, interiors, and different compositions. In the end, while some results were interesting, I was somewhat unimpressed overall. The images felt uncanny in a way that wasn’t visually satisfying, at least for this project. But even if the results weren’t perfect, I had now generated a substantial amount of experimental footage.
With hours of clips sitting on my drive, I reached another creative fork. I had material — just not the project I had initially expected. Rather than throw it away, I decided to assemble it into something new. That’s when I opened Premiere Pro and started building a structure out of everything I had gathered. I sequenced the shots, layered them, looked for rhythm and contrast. The piece became less about polished transitions and more about fragmentation, texture, and process. Runway’s imperfections became part of the aesthetic.
The next step was sound. To tie the visuals together, I turned to ElevenLabs, an AI voice synthesis tool. Instead of writing a script in advance, I improvised one as a reflection on the footage itself — almost like driving without a map and narrating the scenery. I recorded my voice and then created four distinct character voices within ElevenLabs. Using my own recordings as base material, I cloned and altered them so each voice felt like a variation of myself — familiar, yet slightly off. This created a hybrid dialogue between human input and machine modulation. The narration became a layered conversation, processed and re-performed through AI. It blurred authorship in a way I found compelling: I wrote the words, I recorded the sound, but the final voices were machine-shaped extensions of me.
After I finalized the voices, I added sound design using effects sourced from Soundly. Ambient textures, small audio cues, and subtle layers helped glue the piece together. The final video is a collage — of generated images, failed experiments, speculative voices, and creative salvage. The vision I ended with is completely different from the one I started with, but the process itself became the artwork. Instead of a seamless animation project, it turned into a document of trial, adaptation, and creative redirection.
Creating this project taught me something valuable about AI tools and authorship. AI is powerful, but it isn’t magic. It doesn’t replace artistic intent — it challenges it, redirects it, sometimes disappoints it, and occasionally surprises it. The human role is not just to operate the software but to respond to it. In my case, the most interesting outcome came after the original idea failed. The machine didn’t deliver what I imagined, but instead of stopping, I collaborated with its limitations. Runway generated visuals. ElevenLabs shaped my voice. I edited, curated, and made decisions. The final piece exists between human and machine — not a product of either alone.
In conclusion, this project is presented as a transparent, single-page reflection on process, authorship, and collaboration with AI. The video stands as the main artifact, supported by documentation and this written reflection. The failures informed the final direction just as much as the successes did, reminding me that sometimes the art lives not in what we set out to make, but in what we learn along the way.