LTX 2.3 ComfyUI Setup Guide
How to install LTX-2.3, download models, and generate AI videos with ComfyUI.
1. Install ComfyUI
Clone the ComfyUI repository and install dependencies. Requires Python 3.10+ and a CUDA-capable GPU with 16GB+ VRAM.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt
2. Install ComfyUI-LTXVideo Nodes
Open ComfyUI Manager, search for "LTXVideo", and install the official Lightricks nodes. Or clone manually:
cd ComfyUI/custom_nodes
git clone https://github.com/Lightricks/ComfyUI-LTXVideo
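After cloning (or installing via the Manager), restart ComfyUI so the new nodes register. A small sketch to confirm the node pack landed in the expected folder, assuming the default directory layout:

```shell
# Confirm the node pack is present before restarting ComfyUI
# (path assumes the default ComfyUI directory layout)
if [ -d ComfyUI/custom_nodes/ComfyUI-LTXVideo ]; then
  echo "LTXVideo nodes found"
else
  echo "LTXVideo nodes not found - clone into ComfyUI/custom_nodes first"
fi
```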
3. Download LTX-2.3 Model
Choose the right model for your VRAM. Place checkpoint files in ComfyUI/models/checkpoints/.
- 32GB+ VRAM: ltx-2.3-22b-dev.safetensors or ltx-2.3-22b-distilled.safetensors (official, ~42GB)
- 16GB VRAM: ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors (FP8 by Kijai, ~25GB, requires an RTX 40-series or newer GPU)
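The choice above can be scripted. A hedged sketch that picks a checkpoint filename from the list by available VRAM (the `vram_gb` value is a placeholder; substitute your card's memory, e.g. from `nvidia-smi`):

```shell
# Sketch: map available VRAM (GB) to a checkpoint from the list above
vram_gb=16  # placeholder - set to your GPU's memory
if [ "$vram_gb" -ge 32 ]; then
  model="ltx-2.3-22b-dev.safetensors"
elif [ "$vram_gb" -ge 16 ]; then
  # FP8 quantization, requires an RTX 40-series or newer GPU
  model="ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors"
else
  model=""  # below 16GB is not covered by this guide
fi
echo "${model:-insufficient VRAM}"
```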
4. Download Required VAE
The TAE (Tiny AutoEncoder) is required for all LTX-2.3 workflows. Place in ComfyUI/models/vae/.
# Download from: https://huggingface.co/Kijai/LTX2.3_comfy
# File: taeltx2_3.safetensors → ComfyUI/models/vae/
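On a fresh install the `vae` folder may not exist yet. A small sketch that prepares the destination before you copy the file in (the `huggingface-cli` line in the comment is one possible way to fetch it, assuming the `huggingface_hub` package is installed):

```shell
# Ensure the VAE folder exists, then place taeltx2_3.safetensors inside it.
# One way to fetch the file, assuming huggingface_hub is installed:
#   huggingface-cli download Kijai/LTX2.3_comfy taeltx2_3.safetensors --local-dir ComfyUI/models/vae
mkdir -p ComfyUI/models/vae
echo "VAE folder ready: ComfyUI/models/vae"
```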
5. Load a Workflow
Use the workflow generator on this site to create a ComfyUI JSON workflow, or download official example workflows from the ComfyUI-LTXVideo repository. Drag the JSON file into ComfyUI to load it.
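If a drag-and-drop load fails silently, the JSON file itself is often malformed (e.g. truncated download). A sketch that sanity-checks a workflow file parses before loading it; `demo_workflow.json` is a stand-in for your downloaded workflow:

```shell
# Sanity-check that a workflow JSON parses before dragging it into ComfyUI
# (demo_workflow.json is a hypothetical stand-in for your workflow file)
printf '{"nodes": [], "links": []}' > demo_workflow.json
if python3 -m json.tool demo_workflow.json > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```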
6. Key Parameters
- Resolution: both dimensions must be divisible by 32. Recommended: 768×512; note that 1280×720 does not satisfy this (720 is not divisible by 32), so round to 1280×704 for 720p-class output
- Frames: must satisfy 8n+1, e.g. 25, 49, or 97 frames
- Steps (Distilled): 8 steps max, CFG = 1
- Steps (Dev): 20–50 steps, CFG = 3–7
- Scheduler: euler recommended for most cases
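The resolution and frame-count rules above are easy to check before queuing a generation. A minimal sketch (the sample values are placeholders; substitute your own settings):

```shell
# Sketch: validate resolution and frame count against the rules above
width=768; height=512; frames=49  # placeholders - use your settings

# Both dimensions must be divisible by 32
if [ $(( width % 32 )) -eq 0 ] && [ $(( height % 32 )) -eq 0 ]; then
  echo "resolution OK"
fi

# Frame count must be of the form 8n+1 (25, 49, 97, ...)
if [ $(( (frames - 1) % 8 )) -eq 0 ]; then
  echo "frame count OK"
fi
```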