Getting Started with LTX 2.3: Complete Tutorial
LTX 2.3 is Lightricks' latest text-to-video model, offering unprecedented quality and control. This tutorial will guide you through the complete workflow from installation to rendering your first video.
Prerequisites
Before starting, ensure you have:
- NVIDIA GPU with at least 12GB VRAM (16GB+ recommended)
- ComfyUI installed and configured
- Basic understanding of AI video generation concepts
Step 1: Download Required Models
Visit our Models page and download:
# Required files (all VRAM tiers)
ltx-video-2b-v0.9.safetensors # Main checkpoint
vae_diffusion_pytorch_model.safetensors # VAE decoder
spatial_upscaler_v0.1.safetensors # Spatial upscaler
For VRAM-optimized versions, check the VRAM guide.
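Once downloaded, the files need to sit where ComfyUI can find them. The helper below is a minimal sketch for checking that; the folder names (checkpoints/, vae/, upscale_models/) are an assumption about a standard ComfyUI layout, so adjust them to match your installation.

```python
from pathlib import Path

# Expected locations of the three required files. The folder names are an
# assumption about a default ComfyUI models layout -- adjust as needed.
REQUIRED = {
    "checkpoints": ["ltx-video-2b-v0.9.safetensors"],
    "vae": ["vae_diffusion_pytorch_model.safetensors"],
    "upscale_models": ["spatial_upscaler_v0.1.safetensors"],
}

def missing_models(models_dir: str) -> list[str]:
    """Return the required files that are not present under models_dir."""
    base = Path(models_dir)
    return [
        f"{folder}/{name}"
        for folder, names in REQUIRED.items()
        for name in names
        if not (base / folder / name).exists()
    ]

if __name__ == "__main__":
    for path in missing_models("ComfyUI/models"):
        print(f"missing: {path}")
```

Running this before launching ComfyUI saves a round trip through red "missing model" errors in the UI.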
Step 2: Install ComfyUI Nodes
LTX 2.3 requires specific custom nodes:
cd ComfyUI/custom_nodes
git clone https://github.com/Lightricks/ComfyUI-LTXVideo
cd ComfyUI-LTXVideo
pip install -r requirements.txt
Restart ComfyUI after installation.
Step 3: Load the Workflow
Download our starter workflow and import it into ComfyUI:
- Open ComfyUI in your browser
- Click "Load" button
- Select the downloaded JSON file
- Verify all nodes are green (no missing dependencies)
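If some nodes stay red, it helps to know exactly which node types the workflow expects. This sketch assumes the standard ComfyUI workflow export format, where the top-level "nodes" list carries a "type" field per node; verify against your actual JSON.

```python
import json

def workflow_node_types(path: str) -> set[str]:
    """Collect the node class types used in a ComfyUI workflow JSON.

    Assumes the common ComfyUI export shape: a top-level "nodes" list
    whose entries each have a "type" string.
    """
    with open(path) as f:
        workflow = json.load(f)
    return {node["type"] for node in workflow.get("nodes", [])}

# Usage sketch: any type not provided by your installed custom nodes
# shows up red on the ComfyUI canvas; printing the set narrows it down.
# print(sorted(workflow_node_types("ltx_starter_workflow.json")))
```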
Step 4: Configure Your Prompt
The prompt is crucial for quality results. Follow these guidelines:
Good prompts:
- Specific and descriptive
- Include camera movement (pan, zoom, dolly)
- Mention lighting and atmosphere
- Specify subject actions
Example:
A majestic eagle soaring through misty mountain peaks at golden hour,
camera slowly panning right, cinematic lighting, 4K quality
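The guidelines above (subject, camera movement, lighting, extra quality tags) can be turned into a small prompt-builder. This is a convenience sketch, not part of any official LTX API; the function name and parameters are made up for illustration.

```python
def build_prompt(subject: str, camera: str = "", lighting: str = "", extras: str = "") -> str:
    """Join the prompt ingredients from the guidelines above into one
    comma-separated prompt, skipping any empty parts. A convenience
    sketch only -- not an official LTX or ComfyUI helper."""
    parts = [subject, camera, lighting, extras]
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    "A majestic eagle soaring through misty mountain peaks at golden hour",
    camera="camera slowly panning right",
    lighting="cinematic lighting",
    extras="4K quality",
)
print(prompt)
```

This reproduces the example prompt above and makes it easy to swap one ingredient at a time while comparing results.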
Step 5: Adjust Generation Settings
Key parameters to tune:
| Parameter | Recommended | Effect |
|---|---|---|
| Steps | 30-50 | Higher = better quality, slower |
| CFG Scale | 7-12 | Prompt adherence strength |
| Resolution | 768x512 | Balance quality/VRAM |
| Frames | 121-161 | Video length (5-6.7 seconds) |
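The frame counts in the table can be sanity-checked numerically. Two assumptions here, both worth verifying against the official docs: the output runs at 24 fps (which is what makes 121-161 frames come out to roughly 5-6.7 seconds), and LTX-style models want frame counts of the form 8n + 1 (97, 121, 161, ...) due to temporal compression.

```python
def video_duration(frames: int, fps: int = 24) -> float:
    """Seconds of video for a given frame count (24 fps assumed)."""
    return frames / fps

def valid_frame_count(frames: int) -> bool:
    """True if frames fits the 8n + 1 pattern (97, 121, 161, ...).
    Treat this constraint as an assumption and confirm it in the
    official LTX documentation."""
    return frames % 8 == 1
```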
Step 6: Generate Your Video
- Click "Queue Prompt" in ComfyUI
- Monitor VRAM usage (Task Manager on Windows, or nvidia-smi on the command line)
- Wait for generation (2-5 minutes depending on settings)
- Find the output in the ComfyUI/output/ folder
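When iterating quickly, it is handy to list the newest files in the output folder from a script. A minimal sketch, assuming a flat output directory; adjust the path to your setup.

```python
from pathlib import Path

def latest_outputs(output_dir: str, n: int = 5) -> list[str]:
    """Return the names of the n most recently modified files in the
    output folder, newest first. A convenience sketch only."""
    files = [p for p in Path(output_dir).iterdir() if p.is_file()]
    files.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return [p.name for p in files[:n]]

# print(latest_outputs("ComfyUI/output"))
```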
Troubleshooting
Out of Memory Error:
- Reduce resolution to 512x512
- Lower frame count to 97
- Use FP8 quantized models
- Enable the --lowvram flag when launching ComfyUI
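The first two points of that checklist can be captured as a rough fallback heuristic: clamp the resolution to 512x512 and snap the frame count down to the nearest 8n + 1 value at or below 97. The function name and the 8n + 1 assumption are mine, not official guidance.

```python
def oom_fallback(width: int, height: int, frames: int) -> tuple[int, int, int]:
    """Suggest reduced settings after an out-of-memory error, per the
    checklist above. The 8n + 1 frame grid is an assumption about
    LTX-style temporal compression -- verify against the docs."""
    new_frames = min(frames, 97)
    new_frames -= (new_frames - 1) % 8  # snap down to the 8n + 1 grid
    return (min(width, 512), min(height, 512), new_frames)
```

For example, starting from the recommended 768x512 at 121 frames, this suggests retrying at 512x512 with 97 frames.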
Poor Quality Results:
- Increase steps to 40+
- Adjust CFG scale (try 8-10)
- Refine prompt with more details
- Check model files are not corrupted
Next Steps
Now that you've generated your first video:
- Experiment with advanced workflows
- Try LoRA fine-tuning
- Join the community on Discord for tips
Happy creating!