
ltx-2.3-22b-distilled-1.1.safetensors

LTX 2.3 Distilled 1.1 (bf16, 24GB)

Official v1.1 distilled model runnable on 24GB with sequential offloading enabled in ComfyUI.

Download ltx-2.3-22b-distilled-1.1.safetensors

Direct HuggingFace download. 46.1 GB · Free.

Install path: ComfyUI/models/checkpoints/ltx-2.3-22b-distilled-1.1.safetensors
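The install path above can be built and verified with a few lines of stdlib Python. The ComfyUI root location is an assumption here; point it at wherever you cloned ComfyUI:

```python
from pathlib import Path

# Assumed ComfyUI root; change this to your actual ComfyUI directory.
COMFYUI_ROOT = Path("ComfyUI")
MODEL_NAME = "ltx-2.3-22b-distilled-1.1.safetensors"

# The checkpoint goes directly under models/checkpoints/, not in a subfolder.
dest = COMFYUI_ROOT / "models" / "checkpoints" / MODEL_NAME

print(dest.as_posix())  # ComfyUI/models/checkpoints/ltx-2.3-22b-distilled-1.1.safetensors
print(dest.is_file())   # True once the download has finished and landed here
```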

No 24GB GPU? Try ltx-2.3-22b-distilled-1.1.safetensors online — free generation included

Skip the 46.1 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 24GB VRAM. Peak usage can reach 31GB, so 24GB cards have little headroom.

GPU                                VRAM    Verdict
RTX 3060 12GB                      12GB    Insufficient VRAM
RTX 4060 Ti / 4070 (16GB)          16GB    Insufficient VRAM
RTX 4070 Ti SUPER / 4080 (16GB)    16GB    Insufficient VRAM
RTX 3090 / 4090 (24GB)             24GB    Tight fit
RTX 5090 / A6000 (32GB+)           32GB+   Comfortable

Recommendation: Enable sequential offloading in ComfyUI settings. Uses latest v1.1 official weights.
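The table's verdicts reduce to a simple threshold check. A minimal sketch with thresholds taken from the table above (the function name is my own, not part of any tool):

```python
def vram_verdict(vram_gb: float) -> str:
    """Map a GPU's VRAM (in GB) to the compatibility verdict from the table."""
    if vram_gb < 24:
        return "Insufficient VRAM"  # below the 24GB minimum
    if vram_gb < 32:
        return "Tight fit"          # loads, but peak usage approaches 31GB
    return "Comfortable"            # 32GB+ leaves real headroom

print(vram_verdict(12))  # Insufficient VRAM  (RTX 3060)
print(vram_verdict(24))  # Tight fit          (RTX 3090 / 4090)
print(vram_verdict(32))  # Comfortable        (RTX 5090 / A6000)
```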

How to use ltx-2.3-22b-distilled-1.1.safetensors

  1. Download the file from HuggingFace.
  2. Place it in ComfyUI/models/checkpoints/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
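After step 2 you can sanity-check the placement yourself before restarting: scan the checkpoints folder for the file, the same directory ComfyUI reads its model list from. A stdlib sketch (the root path argument is an assumption; pass your own ComfyUI directory):

```python
from pathlib import Path

def list_checkpoints(comfyui_root: str = "ComfyUI") -> list[str]:
    """Return the .safetensors files sitting directly in models/checkpoints/.

    Only the top level is scanned, matching the install path above.
    """
    ckpt_dir = Path(comfyui_root) / "models" / "checkpoints"
    if not ckpt_dir.is_dir():
        return []
    return sorted(p.name for p in ckpt_dir.glob("*.safetensors"))

# Expect "ltx-2.3-22b-distilled-1.1.safetensors" in this list before loading a workflow.
print(list_checkpoints())
```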

Compatible official workflows:

Don't want to run this locally? Try ltx-2.3-22b-distilled-1.1.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.

Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/checkpoints/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. Filename must match exactly: ltx-2.3-22b-distilled-1.1.safetensors.

CUDA out of memory error when loading the model

ltx-2.3-22b-distilled-1.1.safetensors needs ~24GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings
  • Lower the resolution (512x512 instead of 1280x720)
  • Reduce frame count (25 frames instead of 97)
  • Use a smaller variant — see Related models below.
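To see why the resolution and frame-count knobs help, note that per-generation activation memory scales roughly with pixels × frames. A back-of-the-envelope sketch (the linear proportionality is an assumption; real usage also includes fixed weight costs):

```python
def relative_activation_cost(width: int, height: int, frames: int) -> int:
    """Rough proxy for activation memory: total pixels times frame count."""
    return width * height * frames

full = relative_activation_cost(1280, 720, 97)  # the larger settings mentioned above
low = relative_activation_cost(512, 512, 25)    # the reduced settings mentioned above

# The reduced settings cut the activation workload by roughly 14x.
print(round(full / low, 1))  # 13.6
```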

What CFG and step count should I use?

Distilled models work best with CFG=1 and 8 sampling steps. Higher CFG or more steps with a distilled checkpoint produces over-saturated output and wastes time.
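In a ComfyUI workflow that means setting the sampler node accordingly. A minimal settings fragment as a plain dict for illustration; the key names mirror common KSampler fields but are not an official API:

```python
# Recommended sampler settings for this distilled checkpoint (values from above).
# Key names are illustrative; match them to your workflow's sampler node fields.
DISTILLED_SAMPLER_SETTINGS = {
    "cfg": 1.0,  # distilled models bake guidance in; higher CFG over-saturates
    "steps": 8,  # extra steps add time without improving a distilled model's output
}

print(DISTILLED_SAMPLER_SETTINGS)
```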

Free newsletter

Get notified when LTX 2.3 Distilled 1.1 (bf16, 24GB) updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models