

LTX 2.3 Distilled 1.1 LoRA (Kijai)

Distilled LoRA v1.1 by Kijai. Use with the dev model for distilled-quality output on 16GB VRAM.

Download ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors

Direct HuggingFace download. 2.74 GB · Free.

Install path: ComfyUI/models/loras/ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors

No 16GB GPU? Try ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors online — free generation included

Skip the 2.74 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 16GB VRAM.

GPU                                VRAM    Verdict
RTX 3060 12GB                      12GB    Insufficient VRAM
RTX 4060 Ti / 4070 (16GB)          16GB    Tight fit
RTX 4070 Ti SUPER / 4080 (16GB)    16GB    Tight fit
RTX 3090 / 4090 (24GB)             24GB    Comfortable
RTX 5090 / A6000 (32GB+)           32GB    Comfortable

Recommendation: pair with the dev FP8 base model and load this file as a LoRA from ComfyUI/models/loras/.

How to use ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors

  1. Download the file from HuggingFace.
  2. Place it in ComfyUI/models/loras/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
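Steps 1–2 above can be sanity-checked with a short script. This is a minimal sketch, assuming ComfyUI lives in a `ComfyUI/` directory next to where you run it; adjust `comfy_root` to your actual install location.

```python
from pathlib import Path

# Assumed ComfyUI root -- change this to your real install location.
comfy_root = Path("ComfyUI")
lora_name = "ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors"

# The file must sit directly in models/loras/, not in a subfolder.
lora_path = comfy_root / "models" / "loras" / lora_name

print(lora_path)
if not lora_path.is_file():
    print("Not found -- re-check the filename and location, then restart ComfyUI.")
```

If the script reports the file missing but you're sure you downloaded it, check for a doubled extension (`.safetensors.safetensors`) or a browser-renamed file.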

Compatible official workflows:


Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/loras/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. Filename must match exactly: ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors.

CUDA out of memory error when loading the model

ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors needs ~16GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings
  • Lower the resolution (512x512 instead of 1280x720)
  • Reduce frame count (25 frames instead of 97)
  • Use a smaller variant — see Related models below.
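To see why lowering resolution and frame count helps so much, here is a rough back-of-the-envelope sketch of the video latent size. The 8x spatial downscale, no temporal downscale, and 16 latent channels are illustrative assumptions, not the actual LTX VAE compression ratios, but the relative savings scale the same way.

```python
def latent_elems(width, height, frames, channels=16, spatial=8):
    # Rough element count of a video latent tensor.
    # spatial=8 and channels=16 are illustrative, not LTX's real values.
    return (width // spatial) * (height // spatial) * frames * channels

big = latent_elems(1280, 720, 97)   # default-ish settings
small = latent_elems(512, 512, 25)  # the reduced settings suggested above

print(f"reduction factor: {big / small:.1f}x")  # roughly 13.6x fewer elements
```

Activations during denoising scale with the latent size, so a ~13x smaller latent is often the difference between OOM and a successful run on a 16GB card.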

How do I apply this LoRA in ComfyUI?

Load it in a 'LoraLoader' node and connect it after your model loader. Pair this LoRA with the dev base model (not the distilled one) for the right behavior. LoRA strength 1.0 is the trained value — start there.
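The wiring described above can be sketched in ComfyUI's API-format JSON. The node IDs, the choice of CheckpointLoaderSimple, and the base checkpoint filename below are placeholders for illustration; your actual workflow will use whichever loader nodes your dev-model setup requires.

```python
import json

# Minimal API-format fragment: a LoraLoader sits between the model loader
# and whatever consumes MODEL/CLIP downstream. Node IDs are arbitrary;
# "ltx-dev-base.safetensors" is a placeholder, not a real filename.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ltx-dev-base.safetensors"},
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors",
            "strength_model": 1.0,  # trained value -- start here
            "strength_clip": 1.0,
            "model": ["1", 0],  # MODEL output of node 1
            "clip": ["1", 1],   # CLIP output of node 1
        },
    },
}

print(json.dumps(workflow, indent=2))
```

The key points carried over from the text: the LoRA node takes its model/clip inputs from the dev base loader, and both strengths start at 1.0.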

Free newsletter

Get notified when LTX 2.3 Distilled 1.1 LoRA (Kijai) updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models