Community Reactions: LTX 2.3 Official FP8 and Upscaler Models

What the ComfyUI community is saying about the new official FP8 models and spatial/temporal upscalers for LTX 2.3.

By ltx workflow

Editor's Note: This article summarizes community reactions and tips from Reddit and Discord following the release of LTX 2.3's official FP8 models and new upscalers. The community has been actively testing and sharing results.

The release of official FP8 models from Lightricks and new upscaler variants has generated significant discussion in the ComfyUI and AI video generation communities.

Official FP8 Models: Community Verdict

When Lightricks released their own FP8 quantized variants (ltx-2.3-22b-dev-fp8.safetensors and ltx-2.3-22b-distilled-fp8.safetensors), the community quickly compared them to Kijai's existing FP8 variants.

Key findings from community testing:

  • Quality parity: Most users report the official FP8 and Kijai FP8 produce nearly identical output quality
  • Size difference: Official FP8 is ~29GB vs Kijai's ~25GB — the official version uses a different quantization approach
  • FP8 hardware support: hardware FP8 matmuls need an RTX 40-series (Ada) or newer GPU; on older cards both variants fall back to slower software emulation
  • Recommendation: For 16GB VRAM users, Kijai's v1.1 FP8 distilled remains the top pick due to smaller size

Spatial Upscaler x1.5 vs x2

The addition of the x1.5 spatial upscaler alongside the existing x2 has been well-received:

"The x1.5 upscaler is a game changer for me — x2 was too aggressive and introduced artifacts on fast motion. x1.5 hits the sweet spot." — community feedback

When to use each:

  • x1.5: Better for videos with fast motion, detailed textures, or when x2 produces artifacts
  • x2: Better for low-resolution base generations where maximum upscaling is needed
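The practical difference between the two factors is easiest to see in output resolutions. A minimal sketch (the `upscaled_resolution` helper and the rounding behavior are assumptions for illustration, not part of the LTX tooling):

```python
def upscaled_resolution(width: int, height: int, factor: float) -> tuple[int, int]:
    """Scale each dimension by the upscaler factor, rounding to the nearest pixel."""
    return round(width * factor), round(height * factor)

# A 512x320 base generation under each spatial upscaler:
print(upscaled_resolution(512, 320, 1.5))  # (768, 480)
print(upscaled_resolution(512, 320, 2.0))  # (1024, 640)
```

The x1.5 path roughly doubles the pixel count while x2 quadruples it, which is consistent with the community's observation that x2 is more aggressive and more artifact-prone on fast motion.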

Temporal Upscaler x2

The temporal upscaler for frame interpolation has opened up new workflow possibilities:

  • Generate 25 frames → interpolate to 49 frames for smoother motion
  • Particularly effective for slow-motion style videos
  • Works best when combined with the spatial upscaler in a two-stage pipeline
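The 25 → 49 frame count above is what you get if the temporal x2 upscaler inserts one new frame between each adjacent pair of existing frames. A small sketch of that arithmetic (the helper name and the interpolation model are assumptions, not a documented LTX formula):

```python
def interpolated_frame_count(frames: int, factor: int = 2) -> int:
    """Frame count after inserting (factor - 1) new frames between each
    adjacent pair: N frames become factor * (N - 1) + 1."""
    return factor * (frames - 1) + 1

print(interpolated_frame_count(25))  # 49
```

This also explains why the output is 49 rather than exactly 2 × 25 = 50: the endpoints are shared, so only the 24 gaps between frames are doubled.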

LoRA v1.1 Official

The official rank-384 LoRA v1.1 from Lightricks has replaced community workarounds:

  • Cleaner integration with the dev model
  • Better quality than previous LoRA approaches
  • Place in models/loras/ and load via the LoRA loader node in ComfyUI
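Installation is just a file move into ComfyUI's LoRA folder. A sketch, assuming a default ComfyUI directory layout; the LoRA filename below is hypothetical, so substitute the actual file you downloaded:

```shell
# Create the LoRA folder if it does not exist yet (default ComfyUI layout).
mkdir -p ComfyUI/models/loras
# Placeholder standing in for the downloaded file; the real name may differ.
touch ltx-2.3-lora-v1.1-rank384.safetensors
# Move the LoRA where the ComfyUI LoRA loader node will find it.
mv ltx-2.3-lora-v1.1-rank384.safetensors ComfyUI/models/loras/
```

After a restart (or a model-list refresh), the file appears in the LoRA loader node's dropdown.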

Tips from the Community

  1. Always use v1.1 models — the v1.0 distilled and FP8 variants are superseded
  2. Sequential offloading for 24GB: Enable in ComfyUI settings to run the full bf16 model
  3. Two-stage pipeline: Generate at 512×320, then upscale — saves VRAM and time
  4. CFG=1 for distilled: Don't change this — the distilled model is trained for CFG=1
