LTX 2.3 VRAM Requirements

16 GB of VRAM is the minimum to run LTX 2.3, using the FP8 quantized distilled checkpoint (a ~25 GB file loaded with ComfyUI sequential offloading). Full BF16 precision needs 48 GB+, or 32 GB with sequential offloading. RTX 40xx or newer is required for standard FP8; RTX 30xx users must use the MXFP8 block-32 variant.

Last updated: 2025-05-13

Quick answer by GPU

| GPU | VRAM | Status | Notes |
| --- | --- | --- | --- |
| RTX 3060 12 GB | 12 GB | Not supported | Insufficient VRAM for any LTX 2.3 checkpoint |
| RTX 3080 10 GB | 10 GB | Not supported | Insufficient VRAM |
| RTX 3090 / 3090 Ti | 24 GB | Supported (MXFP8 only) | Cannot run FP8 scaled; use the mxfp8_block32 variant |
| RTX 4060 Ti 16 GB | 16 GB | Supported | FP8 scaled distilled; recommended starting point |
| RTX 4070 / 4070 SUPER / 4070 Ti | 12 GB | Not supported | 12 GB variants insufficient; the 16 GB Ti SUPER variant works |
| RTX 4070 Ti SUPER | 16 GB | Supported | FP8 scaled distilled |
| RTX 4080 | 16 GB | Supported | FP8 scaled distilled or dev |
| RTX 4090 | 24 GB | Supported | FP8 distilled or dev-fp8 (the 29 GB dev file needs slight offloading) |
| RTX 5090 | 32 GB | Fully supported | FP8 distilled fully resident; BF16 distilled with offloading |
| A6000 | 48 GB | Fully supported | All checkpoints, including BF16 dev |
| A100 40 GB | 40 GB | Supported | BF16 distilled with offloading; FP8 dev resident |
| H100 80 GB | 80 GB | Fully supported | All checkpoints comfortably resident |

VRAM by checkpoint file

All LTX 2.3 checkpoint files and their minimum VRAM requirements.

| File | File size | Min VRAM | Notes |
| --- | --- | --- | --- |
| ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors (recommended) | 25 GB | 16 GB | Requires RTX 40xx+ for FP8 matmul. Best default for 16 GB. |
| ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors | ~25 GB | 16 GB | MXFP8 block-32; use when standard FP8 scaled is unsupported on your GPU. |
| ltx-2.3-22b-dev_transformer_only_fp8_scaled.safetensors | ~25 GB | 16 GB | Dev model FP8; use when you need LoRA support on 16 GB. Requires RTX 40xx+. |
| ltx-2.3-22b-distilled-fp8.safetensors (recommended) | 29.5 GB | 32 GB | Official Lightricks FP8 distilled. Full checkpoint with embedded VAE and audio VAE. |
| ltx-2.3-22b-dev-fp8.safetensors | 29.1 GB | 24 GB | Official Lightricks FP8 dev. Use for quality/LoRA workflows on 24 GB+. |
| ltx-2.3-22b-distilled-1.1.safetensors | 46.1 GB | 32 GB (offloading) | Official BF16 distilled; the 46 GB file requires sequential offloading on 32 GB. Best quality at full precision. |
| ltx-2.3-22b-dev.safetensors | ~42 GB | 48 GB (or 32 GB + offloading) | Full BF16 dev. Use only for LoRA training. Sequential offloading required on 32 GB. |
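
For scripted setups, the table's decision logic can be condensed into a small helper. This is an illustrative sketch, not part of any LTX or ComfyUI API; the function name and parameters are our own.

```python
def pick_checkpoint(vram_gb: float, supports_fp8_scaled: bool, want_lora: bool = False) -> str:
    """Map VRAM and FP8 support to a checkpoint file from the table above."""
    if vram_gb < 16:
        raise ValueError("no LTX 2.3 checkpoint runs below 16 GB of VRAM")
    if not supports_fp8_scaled:
        # Ampere-class cards (e.g. RTX 3090): only the MXFP8 block-32 variant works
        return "ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors"
    if want_lora:
        # The dev model carries LoRA support; its FP8 variant fits from 16 GB up
        return "ltx-2.3-22b-dev_transformer_only_fp8_scaled.safetensors"
    if vram_gb >= 32:
        # Full official FP8 distilled checkpoint with embedded VAE and audio VAE
        return "ltx-2.3-22b-distilled-fp8.safetensors"
    # 16-31 GB: the recommended FP8 scaled distilled transformer
    return "ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors"

print(pick_checkpoint(16, supports_fp8_scaled=True))
```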

Frequently asked questions

What is the minimum VRAM to run LTX 2.3?

16 GB VRAM is the minimum, using the FP8 quantized checkpoint (ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors, 25 GB file). This requires an RTX 40-series GPU (Ada Lovelace) for native FP8 matmul support. RTX 30xx users should use the MXFP8 block-32 variant instead.

Can I run LTX 2.3 on an RTX 3090?

Yes, but not with the standard FP8 scaled checkpoint. The RTX 3090 has 24 GB VRAM but does not support FP8 scaled matmul (an RTX 40xx Ada Lovelace feature). Use the MXFP8 block-32 variant: ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors.
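
You can verify this programmatically: FP8 tensor-core matmul needs CUDA compute capability 8.9 (Ada) or newer, and PyTorch exposes the capability directly. This is a generic hardware check, not an LTX-specific API.

```python
import torch

# RTX 40xx (Ada) reports compute capability 8.9; RTX 30xx (Ampere) reports
# 8.6 and lacks FP8 tensor cores, so it needs the MXFP8 block-32 checkpoint.
major, minor = torch.cuda.get_device_capability(0)
if (major, minor) >= (8, 9):
    print("Standard FP8 scaled checkpoints are supported")
else:
    print("Fall back to ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors")
```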

Does LTX 2.3 run on 8 GB or 12 GB VRAM?

No. The smallest LTX 2.3 checkpoint is a 25 GB file and needs at least 16 GB of VRAM even with ComfyUI sequentially offloading the portion of the weights that does not fit. 8 GB and 12 GB GPUs cannot run LTX 2.3.
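
Driver and display overhead can shave a few hundred megabytes off a card's nominal VRAM, so it is worth checking what is actually available. A quick PyTorch sketch:

```python
import torch

# mem_get_info returns (free, total) in bytes for the given device.
free, total = torch.cuda.mem_get_info(0)
print(f"Free: {free / 1024**3:.1f} GB / Total: {total / 1024**3:.1f} GB")
if total / 1024**3 < 16:
    print("Below the 16 GB minimum: no LTX 2.3 checkpoint will run")
```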

Do I need 32 GB VRAM for LTX 2.3?

No. 16 GB VRAM is sufficient for the FP8 distilled checkpoint. 32 GB is needed only if you want to run the full BF16 checkpoints (46 GB distilled, 42 GB dev) — and even then, sequential offloading is required because the files exceed 32 GB.

What is taeltx2_3.safetensors and where does it go?

taeltx2_3.safetensors is the Tiny AutoEncoder VAE required by all LTX 2.3 ComfyUI workflows; without it, ComfyUI cannot decode the latent output into video frames. Place it in ComfyUI/models/vae/. Download it from https://huggingface.co/Kijai/LTX2.3_comfy.
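
If you script your model downloads, a minimal huggingface_hub sketch follows. It assumes the file sits at the root of the Kijai/LTX2.3_comfy repo; adjust local_dir to wherever your ComfyUI install lives.

```python
from huggingface_hub import hf_hub_download

# Fetch the Tiny AutoEncoder VAE straight into ComfyUI's VAE folder.
hf_hub_download(
    repo_id="Kijai/LTX2.3_comfy",
    filename="taeltx2_3.safetensors",
    local_dir="ComfyUI/models/vae",
)
```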