"I'm Still Alive (In The Naked Disco Of My Mind)"
WORKFLOWS & PROJECT DETAILS
🎥 Description
A life of debauchery leading to self-reflection and a lobotomy. Set to a Zouk drum beat. The note in the final shots originally said "One Hour - Taylor Swift"; I later changed it to "One Hour - Undoing My Toxic Masculinity".
-
Music: "I'm Still Alive (In The Naked Disco Of My Mind)" by Mark DK Berry – Available on Bandcamp
-
Date Video Published: 31st Jan 2025
-
Movie clips used (in no particular order): Starship Troopers, Eyes Wide Shut, One Flew Over the Cuckoo's Nest, Fear & Loathing in Las Vegas, Bad Lieutenant, Naked Lunch, American Beauty, Live & Let Die, It's All Gone Pete Tong
🎬 About the Project
This was done slightly differently to my other AI projects. I made a film montage using a method I'd used before AI: cutting movies together into new narratives. I originally finished the video in 2024, but wasn't happy with it. I felt it was missing something.
I'd used green-screen trickery and DaVinci Resolve mask editing to add myself into it, which was kind of fun. To do that, I'd filmed myself with an Android phone against a uniformly coloured bed-sheet backdrop, then green-screen rotoscoped myself using a free-tier RunwayML login at the time.
When new AI tools came along, I revisited the video, using ComfyUI and an anime video workflow to convert it all to a cartoonish effect. I then used DaVinci Resolve to overlay that version on the original at various opacities. I was happy with it after that.
🔧 Workflows & Tools Used
-
The ZIP file below contains one ComfyUI JSON workflow:
- still_alive - video to anime workflow.json
Right-click and choose "Save link as" to download the ZIP containing the ComfyUI JSON workflow.
Here are my notes on the steps I took with the above workflow to convert the original video into an anime-style version:
"The original footage mp4 music video was rendered at 1280 x 720
. I limited the anime version output to 720 x 480
due to time and VRAM constraints, but I might upscale it in Comfyui, or increase it in Davinci later if needed. (both are 24fps)
I added the "Nuke Anime" checkpoint model while trying to improve what I had, which suggested using "high res" as a trigger word.
- I then split the original mp4 into 10-second clips (24 fps = 240 frames each, at 720 x 480) using FFmpeg, like this:
ffmpeg -i input.mp4 -c copy -map 0 -segment_time 10 -f segment -reset_timestamps 1 output_%03d.mp4
-
That gave me 22 clips in total.
-
I ran the first 10-second clip through the workflow: started @ 12:45, ended @ 13:07 (about 25 minutes per 10-second clip).
Est. time: about 9 hours of rendering (22 clips x 25 mins = 550 mins), not including redos."
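The notes stop at rendering, but the 22 converted clips then had to be reassembled for the final edit (which happened in DaVinci Resolve). For reference, a minimal way to rejoin them with FFmpeg's concat demuxer, assuming hypothetical output names like anime_000.mp4 through anime_021.mp4, would be:

printf "file 'anime_%03d.mp4'\n" $(seq 0 21) > clips.txt
ffmpeg -f concat -safe 0 -i clips.txt -c copy anime_full.mp4

One caveat on the split step itself: because it uses -c copy, FFmpeg can only cut on keyframes, so each segment may drift slightly from exactly 10 seconds / 240 frames.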
⏱️ Time & Energy Investment
-
I returned to this project so many times that I probably spent a couple of months on it before it was finally done. A lot of that was filming myself in various positions and then green-screening myself out. That was done in 2024 to get the first cut finished, and the masking was fiddly in places to sit myself into the shots. DaVinci Resolve was used for that and was really good once I figured it out.
-
The rendering out to anime happened in 2025; each clip was passed through the workflow, but it would not have taken more than a couple of days in total, plus one more day to edit everything into a final cut in DaVinci Resolve.
💻 Hardware
-
GPU: RTX 3060 (12GB VRAM)
-
RAM: 32GB
-
OS: Windows 10
All of it was done on a regular home PC, except the green-screen rotoscoping used to mask me out of the video clips recorded on the Android phone, which was done on RunwayML's free tier.
🧰 Software Stack
-
ComfyUI (video to anime workflow)
-
RunwayML free tier – green-screen rotoscoping of the phone-recorded footage of myself.
-
DaVinci Resolve 19 – final edit.
🎨 Loras Used & Trained
N/A
📺 Resolution & Rendering Details
Mostly working at 720 x 480 and 24 fps.
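If you want to sanity-check a clip's resolution and frame rate before committing to a long batch run, a standard FFprobe one-liner (not from my original notes, just a convenience) is:

ffprobe -v error -select_streams v:0 -show_entries stream=width,height,r_frame_rate -of csv=p=0 input.mp4

For footage at this project's working settings it should print something like 720,480,24/1.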
😵💫 Final Thoughts
I like adapting existing videos into a sort of blancmange version with extras. I didn't like the original, but just doing that somehow revitalised it. Maybe that is just me.
When I get the time, I am going to do the opposite with the original video of "Fallen Angel" to humanize it using a ComfyUI video-to-video model.