I recently put together an animation using Kaiber (kaiber.ai), a generative video tool built on Stable Diffusion, the well-known text-to-image deep learning model. The video is for ‘Going Across The Wind’, my new track.
Kaiber takes text prompts much like Stable Diffusion itself. I played around with prompts like ‘desolate/ruined/wet city at night’ and added ‘riot,’ ‘flames,’ and ‘smoke’ for a bit of extra flair. (Fun times over at Andy Rray!)
One cool feature of Kaiber is the use of ‘storyboards.’ These let me switch up prompts during the animation. I timed new prompts with shifts in the music, creating unique transitions. Kaiber responded by generating some intriguing and occasionally beautiful changes at those points.
However, I’ll admit that I had trouble getting Kaiber to transition exactly when I wanted, likely due to user error. The transitions kept landing at odd moments in the timeline, out of sync with the music. To get the final version, I had to edit together pieces from a few different full-length renders.
For those curious about Kaiber, check out Tim at Theoretically Media’s video to learn more about using it for your own music visualizers:
Mastering Kaiber AI : Ultimate Tutorial and Deep Dive (AI Text to Video & Video to Video)
You can see my video here.