LTX 2.3 VRAM Requirements
16 GB VRAM is the minimum to run LTX 2.3, using the FP8 quantized distilled checkpoint (a ~25 GB file loaded with ComfyUI sequential offloading). Full BF16 precision requires 48 GB, or 32 GB with sequential offloading. RTX 40xx or newer is required for standard FP8; RTX 30xx users must use the MXFP8 block-32 variant.
Last updated: 2025-05-13
Quick answer by GPU
| GPU | VRAM | Status | Notes |
|---|---|---|---|
| RTX 3060 12 GB | 12 GB | Not supported | Insufficient VRAM for any LTX 2.3 checkpoint |
| RTX 3080 10 GB | 10 GB | Not supported | Insufficient VRAM |
| RTX 3090 / 3090 Ti 24 GB | 24 GB | Supported (MXFP8 only) | Cannot run FP8 scaled — use mxfp8_block32 variant |
| RTX 4060 Ti 16 GB | 16 GB | Supported | FP8 scaled distilled — recommended starting point |
| RTX 4070 / 4070 Ti 12 GB | 12 GB | Not supported | 12 GB variants insufficient; 16 GB SUPER variant works |
| RTX 4070 SUPER / Ti SUPER 16 GB | 16 GB | Supported | FP8 scaled distilled |
| RTX 4080 16 GB | 16 GB | Supported | FP8 scaled distilled or dev |
| RTX 4090 24 GB | 24 GB | Supported | FP8 distilled or dev-fp8 (29 GB needs slight offloading) |
| RTX 5090 32 GB | 32 GB | Fully supported | FP8 distilled resident, BF16 distilled with offloading |
| A6000 48 GB | 48 GB | Fully supported | All checkpoints including BF16 dev |
| A100 40 GB | 40 GB | Supported | BF16 distilled with offloading; FP8 dev resident |
| H100 80 GB | 80 GB | Fully supported | All checkpoints comfortably resident |
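Not sure which row you fall into? A quick check of total VRAM and compute capability answers both questions at once. The sketch below assumes a CUDA build of PyTorch; treating compute capability 8.9 (Ada Lovelace, i.e. RTX 40xx) as the cutoff for native FP8 matmul is an assumption based on the RTX 40xx requirement above.

```python
# Minimal GPU check: total VRAM and native FP8 capability.
# Assumes a CUDA-enabled PyTorch install. Compute capability 8.9+
# (Ada Lovelace / RTX 40xx) is taken as the cutoff for the standard
# FP8 scaled checkpoints, per the table above.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA GPU detected")

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / 1024**3
fp8_native = (props.major, props.minor) >= (8, 9)

print(f"GPU: {props.name}")
print(f"VRAM: {vram_gb:.1f} GB")
print(f"Native FP8 matmul: {'yes' if fp8_native else 'no (use the MXFP8 block-32 variant)'}")
```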
VRAM by checkpoint file
All LTX 2.3 checkpoint files and their minimum VRAM requirements.
| File | File size | Min VRAM | Notes |
|---|---|---|---|
| ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors (recommended) | 25 GB | 16 GB | Requires RTX 40xx+ for FP8 matmul. Best default for 16 GB. |
| ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors | ~25 GB | 16 GB | MXFP8 block-32 — use when standard FP8 scaled is unsupported on your GPU. |
| ltx-2.3-22b-dev_transformer_only_fp8_scaled.safetensors | ~25 GB | 16 GB | Dev model FP8 — use when you need LoRA support on 16 GB. Requires RTX 40xx+. |
| ltx-2.3-22b-distilled-fp8.safetensors (recommended) | 29.5 GB | 32 GB | Official Lightricks FP8 distilled. Full checkpoint with embedded VAE and audio VAE. |
| ltx-2.3-22b-dev-fp8.safetensors | 29.1 GB | 24 GB | Official Lightricks FP8 dev. Use for quality/LoRA workflows on 24 GB+. |
| ltx-2.3-22b-distilled-1.1.safetensors | 46.1 GB | 32 GB (offloading) | Official BF16 distilled. Requires sequential offloading on 32 GB — file is 46 GB. Best quality at full precision. |
| ltx-2.3-22b-dev.safetensors | ~42 GB | 48 GB (or 32 GB + offloading) | Full BF16 dev. Use only for LoRA training. Sequential offloading required on 32 GB. |
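The table's decision logic can be condensed into a small helper. This is a sketch only: the filenames and VRAM thresholds are copied from the rows above, pick_checkpoint is a hypothetical helper name, and real-world headroom for resolution and frame count still applies.

```python
# Pick an LTX 2.3 checkpoint per the table above, given total VRAM
# and whether the GPU supports standard FP8 scaled matmul (RTX 40xx+).
# Thresholds mirror the "Min VRAM" column; treat the result as a
# starting point, not a guarantee.
def pick_checkpoint(vram_gb: float, fp8_native: bool, want_lora: bool = False) -> str:
    if vram_gb < 16:
        raise ValueError("LTX 2.3 needs at least 16 GB of VRAM")
    if not fp8_native:
        # RTX 30xx path: standard FP8 scaled is unsupported.
        return "ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors"
    if want_lora:
        # Dev checkpoints are the ones with LoRA support.
        return ("ltx-2.3-22b-dev-fp8.safetensors" if vram_gb >= 24
                else "ltx-2.3-22b-dev_transformer_only_fp8_scaled.safetensors")
    if vram_gb >= 32:
        return "ltx-2.3-22b-distilled-fp8.safetensors"
    return "ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors"

print(pick_checkpoint(16, fp8_native=True))   # FP8 scaled distilled
print(pick_checkpoint(24, fp8_native=False))  # RTX 3090 -> MXFP8 block-32
```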
Frequently asked questions
What is the minimum VRAM to run LTX 2.3?
16 GB VRAM is the minimum, using the FP8 quantized checkpoint (ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors, 25 GB file). This requires an RTX 40-series GPU (Ada Lovelace) for native FP8 matmul support. RTX 30xx users should use the MXFP8 block-32 variant instead.
Can I run LTX 2.3 on an RTX 3090?
Yes, but not with the standard FP8 scaled checkpoint. The RTX 3090 has 24 GB VRAM but does not support FP8 scaled matmul (an RTX 40xx Ada Lovelace feature). Use the MXFP8 block-32 variant: ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors.
Does LTX 2.3 run on 8 GB or 12 GB VRAM?
No. The smallest LTX 2.3 checkpoint is a 25 GB file and needs at least 16 GB of VRAM; whatever does not fit is streamed in by ComfyUI's sequential offloading. 8 GB and 12 GB GPUs cannot run LTX 2.3.
Do I need 32 GB VRAM for LTX 2.3?
No. 16 GB VRAM is sufficient for the FP8 distilled checkpoint. 32 GB is needed only if you want to run the full BF16 checkpoints (46 GB distilled, 42 GB dev) — and even then, sequential offloading is required because the files exceed 32 GB.
What is taeltx2_3.safetensors and where does it go?
taeltx2_3.safetensors is the Tiny AutoEncoder video VAE required by all LTX 2.3 ComfyUI workflows. Without it, ComfyUI cannot decode the latent output to video frames. Place it in ComfyUI/models/vae/. Download it from https://huggingface.co/Kijai/LTX2.3_comfy.
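For completeness, a download sketch using the huggingface_hub package. The repo id and filename come from the FAQ answer above; the assumption that the file sits at the repo root, and the ComfyUI path, are mine, so adjust both to your setup.

```python
# Fetch taeltx2_3.safetensors into ComfyUI/models/vae/.
# Assumes huggingface_hub is installed (pip install huggingface_hub)
# and that the file sits at the root of the Kijai/LTX2.3_comfy repo.
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # hypothetical path: point at your install

hf_hub_download(
    repo_id="Kijai/LTX2.3_comfy",
    filename="taeltx2_3.safetensors",
    local_dir=COMFYUI_DIR / "models" / "vae",
)
```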