News · April 27, 2026

LTX 2.3 Official FP8 Models: Dev and Distilled Now Available from Lightricks

Lightricks releases official FP8 quantized variants of LTX 2.3 — 29.1GB dev and 29.5GB distilled models that run on 16GB VRAM without third-party quantization.

By ltx workflow

Editor's Note: Lightricks has officially released FP8 quantized versions of LTX 2.3 — both the dev and distilled variants — enabling high-quality audio-video generation on 16GB VRAM GPUs without relying on third-party quantization tools.

Lightricks has published official FP8 quantized checkpoints for LTX 2.3, their DiT-based audio-video foundation model. These models are now available directly on HuggingFace at Lightricks/LTX-2.3-fp8, making it easier than ever to run state-of-the-art video generation on consumer hardware.

LTX 2.3 FP8 Official Models

What Are the New FP8 Models?

Two official FP8 checkpoints are now available:

  • ltx-2.3-22b-dev-fp8.safetensors: the full 22B-parameter model, flexible and trainable, quantized to FP8
  • ltx-2.3-22b-distilled-fp8.safetensors: the distilled version (8-step inference, CFG = 1), quantized to FP8

FP8 (8-bit floating point) quantization roughly halves weight memory compared to the BF16 originals, allowing these 22B-parameter models to run on GPUs with 16GB of VRAM, such as the RTX 4080 or RTX 4060 Ti (16GB); cards like the RTX 4090 (24GB) have headroom to spare.
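
The scale of that saving follows directly from the parameter count. A back-of-the-envelope sketch (pure arithmetic, not a measurement; the published ~29GB files are larger than the bare FP8 estimate, presumably because they bundle additional components or keep some tensors at higher precision):

```python
# Back-of-the-envelope weight-memory estimate for a 22B-parameter model.
# Raw weight storage only; activations, the text encoder, and any tensors
# kept at higher precision add to the real footprint.

PARAMS = 22e9  # 22 billion parameters

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return params * bytes_per_param / 1024**3

bf16 = weight_gib(PARAMS, 2)  # BF16: 2 bytes per parameter
fp8 = weight_gib(PARAMS, 1)   # FP8:  1 byte per parameter

print(f"BF16: {bf16:.1f} GiB, FP8: {fp8:.1f} GiB")
# BF16 works out to roughly 41 GiB, FP8 to roughly 20.5 GiB
```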

Why Official FP8 Matters

Previously, FP8 variants of LTX 2.3 were only available through community contributors like Kijai. While those community FP8 models worked well, having official Lightricks FP8 releases brings several advantages:

  • Validated quality: Lightricks has tested and verified these quantized models maintain output quality
  • Ongoing support: Official models will receive updates alongside the base model
  • Simplified workflow: No need to quantize yourself or rely on third-party conversions
  • ComfyUI compatibility: Works with the built-in LTXVideo nodes via ComfyUI Manager

About LTX 2.3

LTX 2.3 is a significant update to the LTX-2 model, featuring improved audio and visual quality as well as enhanced prompt adherence. It is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model — a key differentiator from video-only generation models.

Key technical details:

  • Architecture: Diffusion Transformer (DiT)
  • Parameters: 22 billion
  • Capabilities: Joint audio-video generation
  • Developer: Lightricks
  • License: Permissive for commercial and personal use

Running the FP8 Models in ComfyUI

Lightricks recommends using the built-in LTXVideo nodes available through ComfyUI Manager. For the distilled FP8 model, use these settings for best results:

  • Steps: 8
  • CFG scale: 1.0
  • Resolution: Width and height must be divisible by 32
  • Frame count: Must be one more than a multiple of 8, i.e. 8n + 1 (e.g., 25, 33, 49, or 97 frames)
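
These constraints are easy to check before queuing a generation. A minimal sketch (the helper names are my own, not part of the LTXVideo nodes):

```python
# Validate LTX 2.3 distilled-model generation settings before running.
# Constraints from the article: width and height divisible by 32,
# frame count of the form 8n + 1.

def valid_resolution(width: int, height: int) -> bool:
    """True if both dimensions are divisible by 32."""
    return width % 32 == 0 and height % 32 == 0

def valid_frame_count(frames: int) -> bool:
    """True if the frame count is one more than a multiple of 8."""
    return frames % 8 == 1

# All of the article's example frame counts pass:
for frames in (25, 33, 49, 97):
    assert valid_frame_count(frames)

print(valid_resolution(1280, 704))  # True: both divisible by 32
print(valid_frame_count(96))        # False: 96 % 8 == 0, not 1
```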

Dev vs. Distilled: Which Should You Use?

Use ltx-2.3-22b-dev-fp8 if you want:

  • Maximum quality and flexibility
  • Fine-tuning or LoRA training capability (note: Lightricks recommends training the BF16 model; FP8 training recipes are a community contribution opportunity)
  • Full CFG control for creative exploration

Use ltx-2.3-22b-distilled-fp8 if you want:

  • Faster generation (8 steps instead of a full sampling schedule)
  • Lower VRAM usage during inference
  • Quick iteration and prototyping

Official FP8 vs. Kijai FP8 Variants

Community contributor Kijai has also published FP8 variants at Kijai/LTX2.3_comfy. Both sets of models are valid options:

  • Official Lightricks FP8: Validated by the model authors, will receive official support
  • Kijai FP8: Community-tested, optimized for ComfyUI workflows, may include additional variants

For most users, the official Lightricks FP8 models are now the recommended starting point.

Getting Started

  1. Download the models from Lightricks/LTX-2.3-fp8 on HuggingFace
  2. Place them in your ComfyUI models/ directory
  3. Install or update the LTXVideo nodes via ComfyUI Manager
  4. Load the model and start generating
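
The download step can be scripted with the `huggingface_hub` library. A sketch using the repo and filename from the article; the `models/checkpoints` subfolder is an assumption about your ComfyUI layout, so check the LTXVideo node documentation for the exact expected location:

```python
# Download an LTX 2.3 FP8 checkpoint into a ComfyUI models directory.
# Repo ID and filename are taken from the article; the "checkpoints"
# subfolder is an assumption and may differ for your setup.
from pathlib import Path

REPO_ID = "Lightricks/LTX-2.3-fp8"
FILENAME = "ltx-2.3-22b-distilled-fp8.safetensors"

def target_dir(comfy_root: str) -> Path:
    """Resolve the (assumed) ComfyUI checkpoint directory."""
    return Path(comfy_root) / "models" / "checkpoints"

if __name__ == "__main__":
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    dest = target_dir("ComfyUI")
    dest.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=dest)
    print("Saved to", path)
```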

For PyTorch users, the codebase is available at github.com/Lightricks/LTX-2, requiring Python >=3.12, CUDA >12.7, and PyTorch ~2.7.
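
Before cloning, it can be worth confirming your environment meets the stated minimums. A trivial sketch of the Python version check (the minimums come from the article; CUDA and PyTorch would be checked separately once installed, e.g. via `torch.version.cuda`):

```python
# Check the running interpreter against the stated Python >= 3.12 requirement.
import sys

def meets_minimum(version: tuple, minimum: tuple) -> bool:
    """Lexicographic version comparison, e.g. (3, 11, 9) < (3, 12)."""
    return tuple(version) >= tuple(minimum)

if __name__ == "__main__":
    ok = meets_minimum(sys.version_info[:3], (3, 12))
    print("Python requirement satisfied:", ok)
```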

