ltx-2.3-22b-distilled-1.1.safetensors
LTX 2.3 Distilled 1.1 (bf16, 24GB)
Official v1.1 distilled model runnable on 24GB with sequential offloading enabled in ComfyUI.
Download ltx-2.3-22b-distilled-1.1.safetensors
Direct HuggingFace download. 46.1 GB · Free.
No 24GB GPU? Try ltx-2.3-22b-distilled-1.1.safetensors online — free generation included
Skip the 46.1 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.
Will this run on my GPU?
Minimum: 24GB VRAM; peak usage can reach about 31GB, so extra headroom helps.
Recommendation: Enable sequential offloading in ComfyUI settings. Uses latest v1.1 official weights.
How to use ltx-2.3-22b-distilled-1.1.safetensors
- Download the file from HuggingFace.
- Place it in models/checkpoints/ inside your ComfyUI directory.
- Restart ComfyUI (or refresh the model list from the menu).
- Load a compatible workflow — see below.
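If you prefer the command line, the steps above can be sketched as a short shell session. This is a minimal sketch, assuming a default install at `~/ComfyUI`; the Hugging Face repo id is left as a placeholder since it is not stated on this page — substitute the repo this download actually comes from.

```shell
# Assumption: ComfyUI lives at ~/ComfyUI — adjust if yours is elsewhere.
COMFYUI_DIR="$HOME/ComfyUI"
CKPT_DIR="$COMFYUI_DIR/models/checkpoints"
mkdir -p "$CKPT_DIR"

# Download straight into the checkpoints folder with the Hugging Face CLI.
# <repo-id> is a placeholder — use the repo this page links to.
# huggingface-cli download <repo-id> ltx-2.3-22b-distilled-1.1.safetensors \
#     --local-dir "$CKPT_DIR"

# After the download, the file should appear here:
ls "$CKPT_DIR"
```

Restart ComfyUI afterwards so the new checkpoint shows up in the model list.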
Compatible official workflows:
- LTX-2.3_T2V_I2V_Single_Stage_Distilled_Full.json — T2V / I2V Single Stage Distilled
- LTX-2.3_T2V_I2V_Two_Stage_Distilled.json — T2V / I2V Two Stage Distilled
- LTX-2.3_ICLoRA_Union_Control_Distilled.json — ICLoRA Union Control
Don't want to run this locally? Try ltx-2.3-22b-distilled-1.1.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.
Common issues
ComfyUI doesn't see the file after I downloaded it
Make sure the file is in ComfyUI/models/checkpoints/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. Filename must match exactly: ltx-2.3-22b-distilled-1.1.safetensors.
CUDA out of memory error when loading the model
ltx-2.3-22b-distilled-1.1.safetensors needs ~24GB VRAM minimum. If you're hitting OOM:
- Enable Sequential Offloading in ComfyUI settings
- Lower the resolution (512x512 instead of 1280x720)
- Reduce frame count (25 frames instead of 97)
- Use a smaller variant — see Related models below
What CFG and step count should I use?
Distilled models work best with CFG=1 and 8 sampling steps. Higher CFG or more steps with a distilled checkpoint produces over-saturated output and wastes time.
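In a ComfyUI API-format workflow those values live on the sampler node. The fragment below is a sketch, not this page's actual workflow JSON: the node id, seed, and sampler/scheduler names are assumed placeholders — only `steps: 8` and `cfg: 1.0` are the recommendation above.

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "steps": 8,
      "cfg": 1.0,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0,
      "seed": 42
    }
  }
}
```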
Get notified when LTX 2.3 Distilled 1.1 (bf16, 24GB) updates
Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.
No spam. Sent occasionally when there's real news. Unsubscribe in one click.