# LTX 2.3 Model Downloads

All official LTX-2.3 checkpoints and quantized variants. Choose based on your GPU's VRAM.

## Main Checkpoints
### LTX-2.3 Dev (Official)

- File: `ltx-2.3-22b-dev.safetensors`
- Full dev model. Flexible and trainable.
- Size: ~42 GB · Recommended VRAM: 32 GB+
- Download on HuggingFace →

### LTX-2.3 Distilled (Recommended)

- File: `ltx-2.3-22b-distilled.safetensors`
- Distilled version: 8 steps, CFG = 1. Faster inference, same quality.
- Size: ~42 GB · Recommended VRAM: 32 GB+
- Download on HuggingFace →

### LTX-2.3 Dev (FP8, Kijai)

- File: `ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors`
- FP8 quantization by Kijai; runs on 16 GB VRAM. Requires an RTX 40xx or newer GPU for FP8 matmuls. Place in `models/checkpoints/`.
- Size: 25 GB · Recommended VRAM: 16 GB+
- Download on HuggingFace →

### LTX-2.3 Distilled (FP8 v3, Kijai)

- File: `ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors`
- FP8 distilled v3 by Kijai. Best option for 16 GB VRAM: 8 steps, CFG = 1.
- Size: 25 GB · Recommended VRAM: 16 GB+
- Download on HuggingFace →

## Additional Files
### Spatial Upscaler x2

- File: `ltx-2.3-spatial-upscaler-x2-1.0.safetensors`
- Spatial upscaler (2x) for two-stage pipelines. Place in `models/latent_upscale_models/`.
- Size: ~1 GB
- Download on HuggingFace →

### LTX-2.3 VAE (Required)

- File: `taeltx2_3.safetensors`
- VAE by Kijai. Required for ComfyUI workflows. Place in `models/vae/`.
- Size: ~0.5 GB
- Download on HuggingFace →
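The VRAM guidance above can be sketched as a small helper for picking a checkpoint. This is illustrative only, not an official tool: the `recommend` function and the folder mapping are assumptions drawn from this listing, and the page states `models/checkpoints/` explicitly only for the FP8 variants.

```python
# Illustrative helper based on the listing above (not an official tool).
# Folder for the full (non-FP8) checkpoints is an assumption; the page
# only states models/checkpoints/ explicitly for the FP8 variants.

DESTINATIONS = {
    "checkpoint":       "models/checkpoints/",            # transformer checkpoints
    "spatial_upscaler": "models/latent_upscale_models/",  # ltx-2.3-spatial-upscaler-x2-1.0
    "vae":              "models/vae/",                    # taeltx2_3.safetensors
}

def recommend(vram_gb: int, distilled: bool = True) -> str:
    """Return the listed checkpoint file matching the available VRAM."""
    if vram_gb >= 32:
        # Full-precision checkpoints (~42 GB each).
        return ("ltx-2.3-22b-distilled.safetensors" if distilled
                else "ltx-2.3-22b-dev.safetensors")
    if vram_gb >= 16:
        # FP8 variants by Kijai (25 GB); note the RTX 40xx+ requirement
        # for FP8 matmuls mentioned above.
        return ("ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors"
                if distilled
                else "ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors")
    raise ValueError("The listed checkpoints need at least 16 GB of VRAM")
```

For example, `recommend(24)` falls back to the FP8 distilled v3 file, while `recommend(32, distilled=False)` returns the full dev checkpoint.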