CG Renders to AI ANIMATION - NIKE video

Integrating ComfyUI and AI into the Creative Workflow - Image by Ardy Ala

Integrating ComfyUI and AI into the Creative Workflow

The primary goal behind developing these workflows is to explore a range of tools and establish an effective pipeline that integrates seamlessly into my design and creative process. I am eager to share what I learn along the way, and at every step I strive to produce the highest-quality work possible. Join our weekly newsletter for updates and case studies.

In this example, I created a range of morphing effects in Houdini, from simple to more complex, to use in ComfyUI.

I experimented with numerous render passes and AI models to achieve the best result.

It's important to acknowledge that there isn't a one-size-fits-all method for these AI animations; each style requires a unique approach. Moreover, the same problem can often be resolved through various techniques, so a universal animation tutorial might not be practical.

Now, without further ado, let's dive in:

πŸ’‘ What is coming from Houdini?

  1. Beauty render: no shaders, basic light setup.

  2. Z-depth: you will need to adjust the depth range in Nuke and export it as a PNG (a rough Python sketch of this remap follows the list below).

  3. Wireframe (toon shader): this will be used by ControlNet to constrain and define the boundaries of each element within a scene.
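As a rough illustration of that depth-range adjustment (the article does it with a Grade-style remap in Nuke before writing a PNG), here is a minimal Python/NumPy sketch. The file names and the near/far values are placeholders, not taken from the project:

```python
import numpy as np
import cv2  # used only to write the 16-bit PNG

def normalize_depth(depth: np.ndarray, near: float, far: float) -> np.ndarray:
    """Remap a raw Z-depth pass into the 0-1 range so ControlNet sees the full gradient."""
    d = np.clip((depth - near) / (far - near), 0.0, 1.0)
    return 1.0 - d  # invert so near points are white, the convention most depth ControlNets expect

depth = np.load("zdepth_frame_0001.npy")  # placeholder: raw depth values from the render
png = (normalize_depth(depth, near=2.0, far=40.0) * 65535).astype(np.uint16)
cv2.imwrite("zdepth_frame_0001.png", png)  # 16-bit PNG keeps the gradient smooth
```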

I use Redshift, so I experimented with the RS Curvature node to find the outlines of the geometry. It worked well, but combining it with my wireframe pass gave an even better result.
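Merging the two line passes is essentially a per-pixel maximum. Here is a minimal sketch with NumPy and Pillow, assuming both passes were rendered as white lines on black (file names are placeholders):

```python
import numpy as np
from PIL import Image

# placeholder file names; both passes are assumed to be white lines on black
curvature = np.asarray(Image.open("rs_curvature_0001.png").convert("L"), dtype=np.float32)
wireframe = np.asarray(Image.open("wireframe_toon_0001.png").convert("L"), dtype=np.float32)

# keep whichever pass has the stronger line at each pixel
combined = np.maximum(curvature, wireframe)
Image.fromarray(combined.astype(np.uint8)).save("lineart_combined_0001.png")
```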


🚨 A few things to consider:

Planning properly from the start is crucial to achieving the best results with AI.

  • It's usually better to avoid extreme Dutch camera angles unless there is a very good reason for using them.

  • Rendering without a ground plane can yield better results for the main subject. However, this approach may lead to some inaccuracies in perspective during interpolation in Stable Diffusion.

  • The background also plays a role. If anything other than the main subject appears in the Z-depth pass, the results for the main subject can be compromised compared to rendering the depth pass with the subject alone. Keeping only the main subject in the depth pass also gives the prompt more breathing room, and therefore more creative freedom (see the depth-matte sketch after this list).

  • Selecting the right checkpoint, motion model, and text prompt is crucial. You could have the perfect setup, yet tuning your text prompt to match your checkpoint can still yield significantly better results.
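As a sketch of the depth-matte idea mentioned above: multiplying the normalized depth by the main subject's matte leaves the background empty, so only the subject constrains the diffusion. File names here are placeholders:

```python
import numpy as np
from PIL import Image

# placeholder inputs: a normalized depth PNG and an alpha/matte of the main subject
depth = np.asarray(Image.open("zdepth_frame_0001.png").convert("L"), dtype=np.float32) / 255.0
matte = np.asarray(Image.open("subject_matte_0001.png").convert("L"), dtype=np.float32) / 255.0

# anything outside the matte becomes black, i.e. "no depth information"
subject_only = depth * matte
Image.fromarray((subject_only * 255).astype(np.uint8)).save("zdepth_subject_only_0001.png")
```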


ComfyUI checkpoint


The 'Dreamshaper' checkpoint is a very powerful renderer, although in some cases I have also achieved very good results with Realistic Vision v5 and v5.1.

LoRA Animation Model


AnimateDiff v3 model


I am using a LoRA for the animation, which is very important; it is the only one compatible with the AnimateDiff v3_sd15 motion model.
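The article wires this up with ComfyUI nodes; purely as a rough analogue, a diffusers-based sketch of loading an SD 1.5 checkpoint with the AnimateDiff v3 motion adapter and its companion adapter LoRA might look like the following. Every repository ID and file name here is my assumption, not something stated in the article:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

# Assumed repo IDs / file names; the article itself uses ComfyUI checkpoint and LoRA loaders.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "Lykon/dreamshaper-8",          # assumed diffusers-format Dreamshaper checkpoint
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
# Assumed location of the adapter LoRA shipped with the AnimateDiff v3 release
pipe.load_lora_weights("guoyww/animatediff", weight_name="v3_sd15_adapter.ckpt")
```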

IPAdapter to apply the reference image

IPAdapter and portrait images:

Since CLIP Vision only processes square images for optimal IPAdapter performance, you would typically need to divide a portrait image into two square sections (left and right) and process them through two separate IPAdapters. However, I opted for a simpler approach: a single IPAdapter with a prepare node that crops the image into a square format.
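In ComfyUI that cropping is handled by the IPAdapter prepare-image node; as a rough standalone illustration of the same idea (center-crop to a square, then resize to the size CLIP Vision expects, which I assume here to be 224x224):

```python
from PIL import Image

def prepare_for_clip_vision(path: str, size: int = 224) -> Image.Image:
    """Center-crop a portrait reference to a square, then resize it for CLIP Vision."""
    img = Image.open(path).convert("RGB")
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.LANCZOS)

reference = prepare_for_clip_vision("nike_reference.jpg")  # placeholder file name
reference.save("nike_reference_square.png")
```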


Improving the first animation

GIF animation - first iteration

The initial animation from the first KSampler turned out quite well, yet the second iteration plays a crucial role in removing artifacts and smoothing the interpolation. In this particular case, I used two ControlNets to preserve the shoe's structure.
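For a sense of what a second, structure-preserving pass does, here is a single-frame analogue using diffusers rather than the article's ComfyUI graph: a low-strength img2img pass over the first result, constrained by depth and line-art ControlNets. The model IDs, prompt, and strength values are assumptions for illustration only:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Single-frame analogue of the second-pass idea; the article runs this per batch of
# frames with a second KSampler in ComfyUI. Model IDs below are my assumptions.
depth_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
line_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "Lykon/dreamshaper-8",                    # assumed checkpoint
    controlnet=[depth_cn, line_cn],
    torch_dtype=torch.float16,
).to("cuda")

first_pass = load_image("first_pass_frame_0001.png")  # output of the first sampler
refined = pipe(
    prompt="futuristic sneaker, studio lighting",      # placeholder prompt
    image=first_pass,
    control_image=[load_image("zdepth_0001.png"), load_image("lineart_0001.png")],
    strength=0.45,                                      # low denoise keeps the structure
    controlnet_conditioning_scale=[1.0, 0.7],
).images[0]
refined.save("second_pass_frame_0001.png")
```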



