ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors
LTX 2.3 Distilled FP8 v2 (Kijai)
FP8 distilled v2 by Kijai. Superseded by v3.
Download ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors
Direct HuggingFace download. ~25 GB · Free.
No 16GB GPU? Try ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors online — free generation included
Skip the ~25 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.
Will this run on my GPU?
Minimum: 16GB VRAM. Recommended for headroom: 24GB.
⚠ FP8 matmul requires RTX 40-series or newer. Older cards (3090/3060) cannot use this format.
Recommendation: this is a previous version; use FP8 v3 or v1.1 FP8 for better quality.
How to use ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors
- Download the file from HuggingFace.
- Place it in ComfyUI/models/checkpoints/ inside your ComfyUI directory.
- Restart ComfyUI (or refresh the model list from the menu).
- Load a compatible workflow — see below.
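The download-and-place steps above can be sketched in Python with `huggingface_hub`. The `REPO_ID` below is a hypothetical placeholder, not the confirmed repository for this file; substitute the actual HuggingFace repo before running.

```python
# Sketch of steps 1-2: fetch the file and place it under
# ComfyUI/models/checkpoints/. REPO_ID is an assumed placeholder --
# replace it with the real HuggingFace repo that hosts this file.
from pathlib import Path

FILENAME = "ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors"
REPO_ID = "Kijai/LTX-2.3"  # assumption: not verified, substitute the real repo id

def expected_model_path(comfy_root: str) -> Path:
    """Where ComfyUI expects the file: <root>/models/checkpoints/<name>."""
    return Path(comfy_root) / "models" / "checkpoints" / FILENAME

def download_model(comfy_root: str) -> Path:
    # Requires `pip install huggingface_hub`; note this pulls ~25 GB.
    from huggingface_hub import hf_hub_download
    dest_dir = expected_model_path(comfy_root).parent
    dest_dir.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=dest_dir)
    return Path(path)

if __name__ == "__main__":
    print(expected_model_path("ComfyUI"))
```

After the download finishes, restart ComfyUI so the new file shows up in the checkpoint list.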
Compatible official workflows:
- LTX-2.3_T2V_I2V_Single_Stage_Distilled_Full.json — T2V / I2V Single Stage Distilled
- LTX-2.3_T2V_I2V_Two_Stage_Distilled.json — T2V / I2V Two Stage Distilled
- LTX-2.3_ICLoRA_Union_Control_Distilled.json — ICLoRA Union Control
Don't want to run this locally? Try ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.
Common issues
ComfyUI doesn't see the file after I downloaded it
Make sure the file is in ComfyUI/models/checkpoints/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. Filename must match exactly: ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors.
I get a CUDA error mentioning fp8 / scaled / matmul
FP8 scaled matmuls require an RTX 40-series GPU or newer (Ada Lovelace architecture). RTX 30-series and older cannot run FP8 weights at native precision. Use the BF16 variant instead, or the MXFP8 block-32 alternative.
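The architecture cutoff maps to CUDA compute capability: Ada Lovelace (RTX 40-series) reports 8.9, Hopper reports 9.0, while Ampere (RTX 30-series) tops out at 8.6. A minimal check, assuming PyTorch is installed:

```python
# Check for hardware FP8 matmul support via CUDA compute capability.
# Ada Lovelace (RTX 40-series) = 8.9, Hopper = 9.0; Ampere (RTX 30-series)
# tops out at 8.6 and lacks FP8 tensor cores.
def supports_fp8(capability: tuple[int, int]) -> bool:
    return capability >= (8, 9)

if __name__ == "__main__":
    # With PyTorch and a CUDA GPU present, query the device directly:
    import torch
    if torch.cuda.is_available():
        cap = torch.cuda.get_device_capability(0)
        print(cap, "FP8 OK" if supports_fp8(cap) else "use the BF16 or MXFP8 variant")
```

If the check fails on your card, switch to the BF16 variant or the MXFP8 block-32 alternative listed under Related models.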
CUDA out of memory error when loading the model
ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v2.safetensors needs ~16GB VRAM minimum. If you're hitting OOM:
- Enable Sequential Offloading in ComfyUI settings
- Lower the resolution (512x512 instead of 1280x720)
- Reduce frame count (25 frames instead of 97)
- Use a smaller variant — see Related models below.
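The resolution and frame-count advice follows from activation memory scaling roughly with width × height × frames. An illustrative comparison (not a real VRAM estimator; constants and model overhead are ignored):

```python
# Rough illustration of why lowering resolution and frame count relieves OOM:
# activation memory scales roughly with width * height * frames.
# This is a toy comparison, not an actual VRAM estimator.
def relative_activation_load(width: int, height: int, frames: int) -> int:
    return width * height * frames

default = relative_activation_load(1280, 720, 97)   # suggested defaults
reduced = relative_activation_load(512, 512, 25)    # reduced settings
print(f"reduced settings use ~{reduced / default:.1%} of the default activation load")
```

Dropping to 512x512 at 25 frames cuts the activation workload by more than an order of magnitude, which is usually enough to fit within 16GB.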
Get notified when LTX 2.3 Distilled FP8 v2 (Kijai) updates
Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.
Related models
ltx-2.3-22b-distilled-1.1_transformer_only_fp8_scaled.safetensors
ltx-2.3-22b-distilled-1.1_transformer_only_mxfp8_block32.safetensors
ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors
ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors
ltx-2.3-22b-dev_transformer_only_fp8_scaled.safetensors
ltx-2.3-22b-dev_transformer_only_mxfp8_block32.safetensors