~25 GB · 16GB+ VRAM · fp8

ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors

LTX 2.3 Distilled FP8 v1 (Kijai)

FP8 distilled v1 by Kijai. Earliest FP8 release, superseded by v3.

Download ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors

Direct HuggingFace download. ~25 GB · Free.

Install path: ComfyUI/models/checkpoints/ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors

No 16GB GPU? Try ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors online — free generation included

Skip the ~25 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 16GB VRAM. For comfortable headroom: 24GB.

GPU                                VRAM    Verdict
RTX 3060 (12GB)                    12GB    Insufficient VRAM
RTX 4060 Ti / 4070 (16GB)          16GB    Tight fit
RTX 4070 Ti SUPER / 4080 (16GB)    16GB    Tight fit
RTX 3090 / 4090 (24GB)             24GB    Comfortable
RTX 5090 / A6000 (32GB+)           32GB+   Comfortable

FP8 matmul requires an RTX 40-series GPU or newer. Older cards (3090/3060) cannot use this format natively, whatever their VRAM; the table above reflects memory fit only.

Recommendation: this is a previous version. Use FP8 v3 or the v1.1 FP8 quant instead for better quality.

How to use ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors

  1. Download the file from HuggingFace (or script it; see the sketch after this list).
  2. Place it in ComfyUI/models/checkpoints/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
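
If you'd rather script step 1, here is a minimal Python sketch using huggingface_hub. The REPO_ID is a placeholder, since this page doesn't name the exact HuggingFace repository; substitute the repo the Download button points to.

    # Scripted download into ComfyUI's checkpoints folder (sketch).
    # REPO_ID is a placeholder assumption; use the repository the
    # Download button above actually links to.
    from huggingface_hub import hf_hub_download

    FILENAME = "ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors"
    REPO_ID = "Kijai/PLACEHOLDER"  # replace with the real repo id

    path = hf_hub_download(
        repo_id=REPO_ID,
        filename=FILENAME,
        local_dir="ComfyUI/models/checkpoints",  # adjust to your install
    )
    print(f"Saved to {path}")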

Compatible official workflows:

Don't want to run this locally? Try ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.

Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/checkpoints/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. Filename must match exactly: ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors.
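
A quick way to rule out path and filename mistakes is to check from Python. A minimal sketch, assuming a standard layout; COMFY_ROOT should point at your actual ComfyUI install.

    # Verify the checkpoint is exactly where ComfyUI looks for it (sketch).
    # COMFY_ROOT is an assumption; point it at your ComfyUI install.
    from pathlib import Path

    COMFY_ROOT = Path("ComfyUI")
    FILENAME = "ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors"

    target = COMFY_ROOT / "models" / "checkpoints" / FILENAME
    if target.is_file():
        print(f"OK: {target} ({target.stat().st_size / 1e9:.1f} GB)")
    else:
        print(f"Missing: {target}")
        # List what is actually there to spot typos or stray subfolders.
        for p in sorted(target.parent.glob("**/*.safetensors")):
            print("  found:", p.relative_to(target.parent))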

I get a CUDA error mentioning fp8 / scaled / matmul

FP8 scaled matmuls require an RTX 40-series GPU or newer (Ada Lovelace architecture). RTX 30-series and older cannot run FP8 weights at native precision. Use the BF16 variant instead, or the MXFP8 block-32 alternative.
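
To confirm whether your card supports native FP8, check its CUDA compute capability; Ada Lovelace (RTX 40-series) reports SM 8.9, while the RTX 30-series tops out at SM 8.6. A minimal PyTorch sketch:

    # Check for native FP8 support (Ada Lovelace = compute capability 8.9+).
    import torch

    if not torch.cuda.is_available():
        print("No CUDA GPU detected.")
    else:
        major, minor = torch.cuda.get_device_capability(0)
        name = torch.cuda.get_device_name(0)
        if (major, minor) >= (8, 9):
            print(f"{name} (SM {major}.{minor}): native FP8 matmul supported.")
        else:
            print(f"{name} (SM {major}.{minor}): no native FP8; use the BF16 variant.")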

CUDA out of memory error when loading the model

ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors needs ~16GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings
  • Lower the resolution (512x512 instead of 1280x720)
  • Reduce frame count (25 frames instead of 97)
  • Use a smaller variant — see Related models below
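
To see how much VRAM is actually free before loading, here is a minimal PyTorch sketch using torch.cuda.mem_get_info:

    # Report free vs. total VRAM before loading the model (sketch).
    import torch

    free, total = torch.cuda.mem_get_info(0)  # values in bytes
    print(f"Free:  {free / 1024**3:.1f} GiB / Total: {total / 1024**3:.1f} GiB")
    if free < 16 * 1024**3:
        print("Below the ~16GB minimum: enable offloading or reduce resolution/frames.")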

Free newsletter

Get notified when LTX 2.3 Distilled FP8 v1 (Kijai) updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models