the art of ice cream animation

Integrating ComfyUI and AI into the Creative Workflow - Image by Ardy Ala

Integrating ComfyUI and AI into the Creative Workflow

The primary goal behind developing these workflows is to explore a range of tools and establish an effective workflow that seamlessly integrates into my design and creative processes. I am eager to share the knowledge I gain along the way, and with each step, I strive to produce the highest quality work possible. Join our weekly newsletter for updates and case studies.

💡 Workflow:

The main effect originates from a previous project, my Nike ad spot; I am repurposing one of its melting effects. Here are the key points:

  • I created a melting effect and extracted only the points from the simulation.

  • After adjusting the particle scale, I rendered it from various camera angles using Redshift.

  • For all the shots, I rendered only the beauty pass, except for one.

  • For one specific shot, I created a very low-poly plate in Houdini to define the plate's position and rendered it alongside the particles. I also rendered the depth map for this shot as a precaution.

  • To guide the animation, I used a combination of an IPAdapter and a few ControlNets.

  • I experimented with ComfyUI's "COCO segmenter" to create a real-time mask from my renders and remove the background. This worked initially but failed towards the end of the sequence due to the nature of my element. Consequently, I had to remove the background in Houdini and render it again (see the alpha-based sketch after this list).

  • I created a mask setup to isolate the red or blue channel from my render, using it to mask a section of the original render. This masked section then served as a new source for my animation (sketched under "Mask from Color" below).

You can get away with very low-quality renders, but since I had already rendered some of these shots, I kept most of the setup.
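If your render already carries an alpha channel, you can skip segmentation entirely. Here is a minimal sketch of that fallback; the RGBA-PNG export and folder names are my assumptions, not part of the original setup:

```python
# Hedged sketch: mask out the background using the render's own alpha
# channel instead of a segmentation model. Assumes the beauty pass was
# exported as RGBA PNGs; paths are illustrative.
from pathlib import Path
from PIL import Image

def matte_from_alpha(src_dir: str, dst_dir: str, bg_color=(0, 0, 0)) -> None:
    """Composite each RGBA frame over a flat background color."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(src_dir).glob("*.png")):
        rgba = Image.open(frame).convert("RGBA")
        bg = Image.new("RGBA", rgba.size, bg_color + (255,))
        # Alpha-composite the subject over the clean background.
        Image.alpha_composite(bg, rgba).convert("RGB").save(out / frame.name)

matte_from_alpha("renders/beauty", "renders/beauty_masked")
```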

Houdini / Redshift Render



Mask from Color
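As a rough illustration of that mask setup, here is a small Python sketch; the channel index, threshold, and file paths are assumptions on my part:

```python
# Hedged sketch of the color-channel mask idea: pull one channel out of
# the render, threshold it into a binary mask, and apply it to the
# original frame to produce a new animation source.
import numpy as np
from PIL import Image

def mask_from_channel(path: str, channel: int = 0, threshold: int = 128):
    """channel: 0 = red, 1 = green, 2 = blue."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    mask = (rgb[..., channel] > threshold).astype(np.uint8) * 255
    masked = rgb * (mask[..., None] // 255)  # keep only pixels inside the mask
    return Image.fromarray(mask, "L"), Image.fromarray(masked, "RGB")

mask, source = mask_from_channel("renders/frame_0001.png", channel=0)
mask.save("masks/frame_0001.png")
source.save("sources/frame_0001.png")
```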


🚨 a few things to consider:

Planning properly from the start is crucial to achieving the best results with AI.

  • It's usually better to avoid extreme Dutch camera angles unless there is a very good reason for using them.

  • Rendering without a ground plane can yield better results for the main subject. However, this approach may lead to some inaccuracies in perspective during interpolation in Stable Diffusion.

  • The background also plays a role. If anything other than the main subject appears in the z-depth pass, the results for the main subject can be compromised compared to a pass that contains the subject alone. Keeping only the main subject in the depth pass gives the prompt more breathing room and, with it, more creative freedom (see the sketch after this list).

  • If you're not going to have a ground plane, avoid camera motion in your renders: it won't translate into camera movement; instead, it will be baked into your object's motion.
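To illustrate the depth-pass point above, here is a small sketch that normalizes a raw depth map so only the main subject carries values before it reaches a depth ControlNet. The 16-bit PNG export and the zero-valued background are my assumptions; flip them to match your renderer:

```python
# Hedged sketch: normalize the subject's depth range and force the
# background to pure black. Assumes the depth pass was exported as a
# 16-bit grayscale PNG with empty (zero) background pixels; if your
# renderer writes far-plane values instead, invert the test. Depending
# on convention you may also need to invert the result (ControlNet
# depth maps usually expect near = white).
import numpy as np
from PIL import Image

def prep_depth_for_controlnet(path: str, out_path: str) -> None:
    depth = np.asarray(Image.open(path), dtype=np.float32)
    subject = depth > 0                      # background pixels are zero
    if subject.any():
        lo, hi = depth[subject].min(), depth[subject].max()
        depth[subject] = (depth[subject] - lo) / max(hi - lo, 1e-6)
    depth[~subject] = 0.0                    # keep the background empty
    Image.fromarray((depth * 255).astype(np.uint8)).save(out_path)

prep_depth_for_controlnet("depth/frame_0001.png", "depth_clean/frame_0001.png")
```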


comfyui checkpoint


The 'Dreamshaper' checkpoint is a very powerful renderer, although I have also achieved very good results in some cases with Realistic Vision v5 and v5.1.
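These community checkpoints are all interchangeable SD 1.5 models, so swapping one for another is a one-line change. If you want to test them outside ComfyUI, a minimal diffusers sketch looks like this (the Hugging Face ID and local path are illustrative):

```python
# Hedged sketch: load the DreamShaper checkpoint via diffusers.
# "Lykon/dreamshaper-8" is the public Hugging Face repo; the local
# .safetensors path below is hypothetical.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Lykon/dreamshaper-8")
# ...or load the same checkpoint from a single downloaded file:
# pipe = StableDiffusionPipeline.from_single_file("models/dreamshaper_8.safetensors")
```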

LoRA Animation Model


AnimateDiff v3 model


I am using a LoRA for the animation, which is very important; it is the only one compatible with the AnimateDiff v3_sd15 model.
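ComfyUI loads these through its checkpoint and LoRA loader nodes; for reference, a script-level equivalent of the same v3 pieces in diffusers might look like this (the model IDs are the public Hugging Face ones, and the prompt is just a placeholder):

```python
# Hedged sketch: AnimateDiff v3 motion module plus its matching
# v3 adapter LoRA, loaded via diffusers rather than ComfyUI nodes.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
# The v3 adapter LoRA that ships alongside the v3 motion module.
pipe.load_lora_weights(
    "guoyww/animatediff",
    weight_name="v3_sd15_adapter.ckpt",
    adapter_name="v3_adapter",
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)
frames = pipe(prompt="melting ice cream, studio lighting", num_frames=16).frames[0]
export_to_gif(frames, "ice_cream.gif")
```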

IPAdapter to implement the reference image

IPAdapter and portrait images:

Since CLIP Vision only processes square images for optimal IPAdapter performance, you would typically need to divide a portrait image into two square sections, the left and right sides, and process them through two separate IPAdapters. However, I opted for a simpler approach: a single IPAdapter with a prepare node that crops the image into a square format.
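A minimal stand-in for that prepare node, where the 224-pixel target and the center-crop policy are assumptions on my part:

```python
# Hedged sketch: center-crop a portrait frame to a square and resize it
# to CLIP Vision's expected input size before it reaches the IPAdapter.
from PIL import Image

def prepare_for_clip_vision(path: str, size: int = 224) -> Image.Image:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)                       # largest centered square
    left, top = (w - side) // 2, (h - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)

prepare_for_clip_vision("reference/portrait.png").save("reference/portrait_sq.png")
```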


Improving the first animation

gif animation - final output

The initial animation from the first KSampler turned out quite well, but the second pass plays a crucial role in cleaning up artifacts and smoothing the interpolation.
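As a loose, standalone illustration of what a second, lower-denoise pass does, here is a per-frame img2img sketch. The checkpoint, strength value, and file layout are assumptions, and a real video pass would run inside the motion module to stay temporally consistent:

```python
# Hedged sketch: re-run first-pass frames through a partial-denoise
# img2img pass so artifacts are refined without re-inventing the motion.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")

for i in range(16):
    frame = load_image(f"pass1/frame_{i:04d}.png")
    refined = pipe(
        prompt="melting ice cream, studio lighting",
        image=frame,
        strength=0.45,  # low denoise: refine the frame, don't replace it
        generator=torch.Generator("cuda").manual_seed(42),  # same seed per frame
    ).images[0]
    refined.save(f"pass2/frame_{i:04d}.png")
```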



