LTX-2.3 Model Downloads

All official LTX-2.3 checkpoints and quantized variants. Choose based on your GPU VRAM.

Main Checkpoints

LTX-2.3 Dev

Official
ltx-2.3-22b-dev.safetensors

Full dev model; flexible and trainable. Recommended: 32GB+ VRAM.

~42 GB · 32GB+ VRAM
Download on HuggingFace →

LTX-2.3 Distilled

Recommended
ltx-2.3-22b-distilled.safetensors

Distilled version: 8 steps, CFG=1. Faster inference with comparable quality.

~42 GB · 32GB+ VRAM
Download on HuggingFace →

LTX-2.3 Dev (FP8, Kijai)

16GB VRAM
ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors

FP8 quantization by Kijai. Runs on 16GB VRAM; requires an RTX 40-series or newer GPU for fp8 matmuls. Place in models/checkpoints/.

~25 GB · 16GB+ VRAM
Download on HuggingFace →
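The FP8 checkpoints above need hardware support for fp8 matmuls, which in practice means CUDA compute capability 8.9 or higher (RTX 40xx "Ada" or newer, or Hopper-class cards). A minimal sketch for checking this, assuming PyTorch is installed:

```python
# Hedged sketch: check whether the current GPU can run native fp8 matmuls.
# fp8 matmuls need CUDA compute capability 8.9+ (RTX 40xx or newer).
# Assumes PyTorch; degrades gracefully when no GPU (or no torch) is present.

def supports_fp8(capability):
    """capability: (major, minor) tuple, e.g. from torch.cuda.get_device_capability()."""
    return tuple(capability) >= (8, 9)

if __name__ == "__main__":
    try:
        import torch
        if torch.cuda.is_available():
            cap = torch.cuda.get_device_capability()
            print(f"compute capability {cap}: fp8 supported = {supports_fp8(cap)}")
        else:
            print("No CUDA device detected.")
    except ImportError:
        print("PyTorch not installed.")
```

Older cards (e.g. RTX 30xx, compute capability 8.6) can still load the fp8 weights in some stacks, but matmuls fall back to higher precision, losing most of the speed benefit.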

LTX-2.3 Distilled (FP8 v3, Kijai)

16GB VRAM
ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors

FP8 distilled v3 by Kijai. Best for 16GB VRAM. 8 steps, CFG=1.

~25 GB · 16GB+ VRAM
Download on HuggingFace →
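The file sizes listed above follow almost directly from the parameter count in the filenames (22B) times bytes per parameter. A back-of-envelope sketch; real files differ slightly because of headers and layers kept in higher precision:

```python
# Rough checkpoint-size estimate: parameter count × bytes per parameter.
# bf16 = 2 bytes/param, fp8 = 1 byte/param.

def checkpoint_size_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

PARAMS_22B = 22e9
print(checkpoint_size_gb(PARAMS_22B, 2))  # bf16: 44.0 GB, near the listed ~42 GB
print(checkpoint_size_gb(PARAMS_22B, 1))  # fp8: 22.0 GB; the ~25 GB files keep some layers in higher precision
```

Note this is weights-on-disk only; VRAM use at inference time is higher because of activations and the KV/attention working set, which is why the listed VRAM requirements exceed a naive weights-only estimate for fp8.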

Additional Files

Spatial Upscaler x2

ltx-2.3-spatial-upscaler-x2-1.0.safetensors

Spatial upscaler (x2) for two-stage pipelines. Place in models/latent_upscale_models/.

~1 GB
Download on HuggingFace →

LTX-2.3 VAE

Required
taeltx2_3.safetensors

VAE by Kijai. Required for ComfyUI workflows. Place in models/vae/.

~0.5 GB
Download on HuggingFace →
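The placement notes above assume a stock ComfyUI folder layout. A minimal sketch of creating it and moving the downloads into place (assumes ComfyUI is installed at ./ComfyUI; adjust the root to your install):

```shell
# Create the model subfolders the cards above refer to.
mkdir -p ComfyUI/models/checkpoints \
         ComfyUI/models/latent_upscale_models \
         ComfyUI/models/vae

# Then move each download into its folder, for example:
# mv ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors ComfyUI/models/checkpoints/
# mv ltx-2.3-spatial-upscaler-x2-1.0.safetensors ComfyUI/models/latent_upscale_models/
# mv taeltx2_3.safetensors ComfyUI/models/vae/
```

If your models live elsewhere (e.g. shared with another UI), ComfyUI's extra_model_paths.yaml can point these categories at custom directories instead.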