AnimateDiff in ComfyUI: Complete Guide to AI Animation

Comprehensive guide to using AnimateDiff in ComfyUI for text-to-video and image-to-video animation, covering installation, workflow setup, and advanced techniques.

By ltx workflow

Editor's Note: This guide provides a complete walkthrough of AnimateDiff in ComfyUI, from installation to advanced animation techniques.

AnimateDiff offers an exciting way to transform your text ideas and images into animated GIFs and videos within the ComfyUI environment.

How Does AnimateDiff Work?

The core of AnimateDiff is a motion modeling module that learns about movement from various video clips. This module seamlessly integrates into pre-trained text-to-image models, enabling your static creations to move, dance, and animate.

AnimateDiff Versions Comparison

AnimateDiff V3: New Motion Module

AnimateDiff V3 represents an evolution in motion module technology with refined features:

  • Motion module: v3_sd15_mm.ckpt
  • Adds a Domain Adapter LoRA that separates motion learning from appearance
  • Produces motion characteristics distinct from V2
  • The Domain Adapter is trained on static frames of the video dataset, so its influence on appearance can be reduced at inference time

AnimateDiff SDXL

Designed for high-resolution videos:

  • Motion module: mm_sdxl_v10_beta.ckpt
  • Creates 1024x1024 resolution animations with 16 frames
  • Currently in Beta stage

AnimateDiff V2

Stable and widely-used version:

  • Motion module: mm_sd_v15_v2 (most recommended)
  • Supports Motion LoRA for camera dynamics
  • 32-frame default limit (expandable with context settings)

Installation in ComfyUI

Install AnimateDiff Node Extension

  1. Open ComfyUI
  2. Open the Manager panel (bottom-right corner)
  3. Click Install Custom Nodes
  4. Search for AnimateDiff
  5. Select ComfyUI-AnimateDiff-Evolved and install it

Note: AnimateDiff-Evolved is the most actively maintained and recommended version.

Download Motion Models

AnimateDiff requires motion model .ckpt files:

  • mm_sd_v14
  • mm_sd_v15
  • mm_sd_v15_v2 (most recommended)
  • v3_sd15_mm

Place the files in the extension's models folder: ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models.
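
As a sketch, the download can be scripted; the motion modules are published on the guoyww/animatediff Hugging Face repository (the wget line is left commented here because the checkpoints are large):

```shell
# Motion models go in the models folder of the AnimateDiff-Evolved extension.
MODELS_DIR="ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models"
mkdir -p "$MODELS_DIR"
# Uncomment to fetch the recommended V2 module from Hugging Face:
# wget -P "$MODELS_DIR" \
#   https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt
echo "Place .ckpt files in: $MODELS_DIR"
```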

Core Node: AnimateDiff Loader

All AnimateDiff workflows rely on the AnimateDiff Loader node:

  • Path: Add Node → Animate Diff → Gen1 nodes → AnimateDiff Loader
  • Injects the motion module into the generation pipeline

Input Ports

  • Model: Must be an SD1.5 checkpoint (this loader does not support SDXL)
  • Context Options: Required for generating beyond the default frame length
  • Motion LoRA: Optional; adds camera movement or extra motion characteristics

Key Settings

Beta Schedule

Selects the sampling noise schedule; affects motion smoothness and temporal consistency (the sqrt_linear option is the usual choice for AnimateDiff)

Motion Scale

Adjusts the intensity of motion in animations

Context Batch Size

Determines animation length - larger batch size = longer animations

Context Length

Defines the temporal window for motion coherence
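
To see how context length and total frame count interact, here is a minimal Python sketch of the sliding-window idea AnimateDiff uses to animate beyond the motion module's native frame count. The function name and the overlap parameter are illustrative, not the extension's actual API:

```python
def context_windows(total_frames: int, context_length: int = 16, overlap: int = 4):
    """Split a frame sequence into overlapping windows.

    Each window is processed with full temporal attention; the overlap
    between neighbouring windows is what keeps motion coherent across
    window boundaries.
    """
    stride = context_length - overlap
    windows, start = [], 0
    while start < total_frames:
        end = min(start + context_length, total_frames)
        windows.append(list(range(start, end)))
        if end == total_frames:
            break
        start += stride
    return windows

# A 32-frame animation with a 16-frame context and 4-frame overlap
# is sampled as three overlapping windows:
for w in context_windows(32):
    print(w[0], "→", w[-1])
```

A larger batch (more total frames) simply yields more windows, which is why batch size controls animation length while context length bounds the temporal attention span.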

Motion LoRA (V2 Only)

Enables camera dynamics control:

  • Zoom in/out
  • Pan left/right
  • Tilt up/down
  • Roll effects

Workflow Types

AnimateDiff V2 & V3: Text to Video

Explore AnimateDiff V3, AnimateDiff SDXL, and AnimateDiff V2 with upscale for high-resolution results.

AnimateDiff + IPAdapter V1: Image to Video

With IPAdapter, efficiently control animation generation using reference images.

AnimateDiff + Batch Prompt Schedule: Text to Video

Batch Prompt schedule offers precise control over narrative and visuals in animation creation.

AnimateDiff + ControlNet + IPAdapter V1: Cartoon Style

Convert an original video into a stylized animation, using only a few reference images to define the preferred style.

Advanced Features

Prompt Travel / Prompt Scheduling

Create dynamic animations with changing prompts across frames for narrative progression.
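
A schedule maps frame indices to prompts. The sketch below follows the keyframe convention used by Batch Prompt Schedule nodes; the frame numbers and prompts are purely illustrative:

```
"0"  : "a forest in spring, fresh green leaves",
"16" : "a forest in autumn, golden falling leaves",
"32" : "a forest in winter, snow on bare branches"
```

Frames between keyframes are interpolated, so the animation drifts smoothly from one prompt to the next.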

Hires Fix

Enhance animation quality through upscaling and refinement techniques.

Model Compatibility

  • Model-agnostic: Most T2I models can be animated directly
  • Natural temporal consistency
  • Frame interpolation and unlimited sequence length
  • Compatible with ControlNet and standard sampling workflows
  • Creator-friendly: Keep using existing LoRAs, prompts, and checkpoints

Key Advantages

  1. Model-Agnostic: Works with most existing Stable Diffusion models
  2. Temporal Consistency: No "every frame looks different" issue
  3. Frame Interpolation: Smooth motion between keyframes
  4. ControlNet Compatible: Precise motion control
  5. Creator-Friendly: Use existing LoRAs, prompts, and checkpoints

AnimateDiff adds motion to your current models without changing how you work.
