How to Use LoRA with LTX 2.3 in ComfyUI
Step-by-step guide to loading and using LoRA weights with LTX 2.3 in ComfyUI, including the official rank-384 v1.1 LoRA and Kijai distilled LoRA for 16GB VRAM.
By ltx workflow
LoRA (Low-Rank Adaptation) weights allow you to customize LTX 2.3's output style, subject, or motion characteristics without retraining the full model. This guide covers the two main LoRA options available for LTX 2.3.
Available LoRA Models
Official Rank-384 LoRA v1.1 (Lightricks)
- File: `ltx-2.3-22b-distilled-lora-384-1.1.safetensors`
- Size: 7.61 GB
- Use with: Dev model (`ltx-2.3-22b-dev.safetensors` or dev FP8)
- Purpose: Applies distilled-quality output to the dev model
Kijai Distilled LoRA v1.1
- File: `ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors`
- Size: 2.74 GB
- Use with: Dev FP8 model on 16GB VRAM
- Purpose: Lightweight LoRA for distilled-quality output on 16GB VRAM
Installation
- Download your chosen LoRA file from HuggingFace
- Place it in your ComfyUI `models/loras/` directory
- The base checkpoint (dev model) goes in `models/checkpoints/`
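As a quick sanity check after downloading, a short script can confirm both files landed in the right folders. This is just a sketch: `COMFYUI_ROOT` and the filenames below are assumptions you should adjust to your own setup.

```python
from pathlib import Path

# Adjust these to your installation -- the paths here are illustrative.
COMFYUI_ROOT = Path("ComfyUI")
LORA_FILE = "ltx-2.3-22b-distilled-lora-384-1.1.safetensors"
CHECKPOINT_FILE = "ltx-2.3-22b-dev.safetensors"

def check_install(root: Path, lora: str, checkpoint: str) -> list[str]:
    """Return a list of human-readable problems; empty means everything is in place."""
    problems = []
    if not (root / "models" / "loras" / lora).is_file():
        problems.append(f"LoRA missing: models/loras/{lora}")
    if not (root / "models" / "checkpoints" / checkpoint).is_file():
        problems.append(f"Checkpoint missing: models/checkpoints/{checkpoint}")
    return problems

for problem in check_install(COMFYUI_ROOT, LORA_FILE, CHECKPOINT_FILE):
    print(problem)
```

If the script prints nothing, both files are where ComfyUI expects them and they will show up in the node dropdowns after a restart.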
ComfyUI Workflow Setup
Step 1: Load the Base Model
Use the Load Checkpoint node and select the dev model:
- `ltx-2.3-22b-dev.safetensors` (32GB VRAM)
- `ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors` (16GB VRAM)
Important: LoRA only works with the dev model, not the distilled checkpoint.
Step 2: Load the LoRA
Add a Load LoRA node after the checkpoint loader:
- Connect the MODEL output from Load Checkpoint to the MODEL input of Load LoRA
- Select your LoRA file
- Set strength: `1.0` (start here, adjust if needed)
Step 3: Connect to Conditioning
Connect the LoRA MODEL output to your LTXVConditioning node as usual.
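The node chain from Steps 1–3 can also be written out in ComfyUI's API (JSON) workflow format, which makes the wiring explicit. This is a minimal sketch: `CheckpointLoaderSimple` and `LoraLoader` are stock ComfyUI node class names, but the exact input names of the LTXV nodes downstream may differ in your ComfyUI version, so treat the structure (not the literals) as the point.

```python
import json

# Sketch of the Step 1-3 node chain in ComfyUI API format.
# A reference like ["1", 0] means "output slot 0 of node 1".
workflow = {
    "1": {  # Step 1: load the dev base model
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ltx-2.3-22b-dev.safetensors"},
    },
    "2": {  # Step 2: apply the LoRA on top of the base model
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["1", 0],      # MODEL output of the checkpoint loader
            "clip": ["1", 1],       # CLIP output (LoraLoader also patches CLIP)
            "lora_name": "ltx-2.3-22b-distilled-lora-384-1.1.safetensors",
            "strength_model": 1.0,  # start at 1.0, tune within 0.8-1.2
            "strength_clip": 1.0,
        },
    },
    # Step 3: downstream nodes (LTXVConditioning, sampler, ...) must take
    # the patched MODEL from ["2", 0] instead of ["1", 0].
}
print(json.dumps(workflow, indent=2))
```

The key detail is the last comment: if anything downstream still reads the MODEL from node 1, the LoRA is silently bypassed, which is the most common cause of "LoRA has no effect".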
Step 4: Scheduler Settings
When using Kijai's distilled LoRA:
- Use LTXVScheduler with `steps=8`, `cfg=1`
- This mimics the distilled model's behavior
For the official rank-384 LoRA:
- You can use more steps (20-30) with CFG guidance for more control
- Or use 8 steps with CFG=1 for distilled-style speed
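The two regimes above boil down to a small decision table. The helper below is a sketch with names of my own choosing, not a ComfyUI API; note in particular that the guide only says "with CFG guidance" for the rank-384 LoRA, so the CFG value used here is a placeholder to experiment with.

```python
def scheduler_settings(lora_kind: str, fast: bool = False) -> dict:
    """Suggested LTXVScheduler settings per LoRA type.

    lora_kind: "distilled" (Kijai's LoRA) or "rank384" (official v1.1).
    fast: for rank384, opt into the 8-step distilled-style regime.
    """
    if lora_kind == "distilled" or (lora_kind == "rank384" and fast):
        # Distilled behavior: few steps, no CFG guidance.
        return {"steps": 8, "cfg": 1.0}
    if lora_kind == "rank384":
        # 20-30 steps with CFG guidance for more control;
        # the cfg value here is a placeholder, not from the guide.
        return {"steps": 25, "cfg": 3.0}
    raise ValueError(f"unknown LoRA kind: {lora_kind}")
```

Plug the returned `steps` and `cfg` into your LTXVScheduler and sampler nodes.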
Tips
- Don't use LoRA with the distilled checkpoint — it's already distilled and LoRA won't apply correctly
- Strength tuning: Values between 0.8–1.2 work best; too high causes artifacts
- 16GB VRAM: Use Kijai's dev FP8 + Kijai's distilled LoRA for the best balance of quality and speed
- 32GB VRAM: Use the official dev model + official rank-384 LoRA v1.1 for maximum quality
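The VRAM tips above amount to a simple decision rule, sketched here for reference (the function name is mine; the filenames are the ones listed earlier in this guide):

```python
def recommend_setup(vram_gb: int) -> dict:
    """Map available VRAM to the checkpoint + LoRA pairing recommended above."""
    if vram_gb >= 32:
        # Maximum quality: official dev model + official rank-384 LoRA v1.1.
        return {
            "checkpoint": "ltx-2.3-22b-dev.safetensors",
            "lora": "ltx-2.3-22b-distilled-lora-384-1.1.safetensors",
        }
    if vram_gb >= 16:
        # Best quality/speed balance: dev FP8 + Kijai's distilled LoRA.
        return {
            "checkpoint": "ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors",
            "lora": "ltx-2.3-22b-distilled-1.1_lora-dynamic_fro09_avg_rank_111_bf16.safetensors",
        }
    raise ValueError("LTX 2.3 with LoRA needs at least 16GB of VRAM")
```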
Troubleshooting
LoRA has no effect: Make sure you're using the dev model, not the distilled checkpoint.
Out of memory: Switch to the FP8 dev model and/or reduce resolution.
Artifacts at high strength: Lower LoRA strength to 0.8 or below.