1.09 GB · 4GB+ VRAM

ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors

Spatial Upscaler x1.5

Spatial upscaler x1.5 for two-stage pipelines. Place in models/latent_upscale_models/.

Download ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors

Direct HuggingFace download. 1.09 GB · Free.

Install path: ComfyUI/models/latent_upscale_models/ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors

Don't have a 4GB GPU? Try ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors online; free generation included.

Skip the 1.09 GB download and ComfyUI setup. Generate a 5-second video using this exact model in your browser, ~30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 4GB VRAM.

GPU                         VRAM    Verdict
RTX 3060                    12GB    Comfortable
RTX 4060 Ti / 4070          16GB    Comfortable
RTX 4070 Ti SUPER / 4080    16GB    Comfortable
RTX 3090 / 4090             24GB    Comfortable
RTX 5090 / A6000            32GB+   Comfortable

Recommendation: use this x1.5 upscaler when the x2 upscale is too aggressive; it is the gentler option.

How to use ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors

  1. Download the file from HuggingFace.
  2. Place it in models/latent_upscale_models/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
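
The steps above can be sketched as a short shell session. The ComfyUI location here is an assumption; point COMFYUI_DIR at your actual install.

```shell
# Minimal sketch of the install steps, assuming the file was downloaded
# to the current directory and ComfyUI lives at ~/ComfyUI (adjust both).
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
MODEL="ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors"

# Create the target folder if it does not exist yet
mkdir -p "$COMFYUI_DIR/models/latent_upscale_models"

# Move the downloaded file into place (skipped if it is not in the current dir)
if [ -f "$MODEL" ]; then
  mv "$MODEL" "$COMFYUI_DIR/models/latent_upscale_models/"
fi

# Verify the folder contents before restarting ComfyUI
ls "$COMFYUI_DIR/models/latent_upscale_models/"
```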

Compatible official workflows:

Don't want to run this locally? Try ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.

Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/latent_upscale_models/ (not a subfolder). Restart ComfyUI fully; the menu refresh sometimes misses new files. The filename must match exactly: ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors.

CUDA out of memory error when loading the model

ltx-2.3-spatial-upscaler-x1.5-1.0.safetensors needs ~4GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings.
  • Lower the resolution (512x512 instead of 1280x720).
  • Reduce the frame count (25 frames instead of 97).
  • Use a smaller variant (see Related models below).
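
To see why lowering resolution and frame count helps, note that activation memory scales roughly linearly with width × height × frames. A back-of-the-envelope sketch (the linear-scaling assumption is an approximation, not an exact LTX 2.3 memory model):

```python
# Rough arithmetic for why the OOM fallbacks work: activation memory
# scales roughly with width * height * frames (an approximation).

def relative_cost(width: int, height: int, frames: int) -> int:
    """Relative activation-memory cost, in arbitrary units."""
    return width * height * frames

full = relative_cost(1280, 720, 97)   # the OOM-prone settings
small = relative_cost(512, 512, 25)   # the suggested fallback

print(f"fallback uses ~{small / full:.1%} of the full-size cost")  # → ~7.3%
```

Dropping to 512x512 and 25 frames cuts the working set by more than an order of magnitude, which is usually enough to get under a 4GB budget.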

How do I use this model in ComfyUI?

Despite the file extension matching LoRA files, this is a latent upscaler, not a LoRA, so it does not go through a LoraLoader node. Use it in the upscale stage of a two-stage LTX workflow: the first pass generates at base resolution, then this model enlarges the latents by 1.5x before the second (refinement) pass. Load one of the compatible official workflows and it will pick the file up from models/latent_upscale_models/.
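
Conceptually, the upscaler sits between the two passes and enlarges the spatial dimensions of the first-pass latent by 1.5x. The sketch below illustrates only the shape change with plain nearest-neighbor resizing; the real model is a learned upscaler, and the channel count and latent sizes here are made-up illustration values:

```python
import numpy as np

# Conceptual sketch of the x1.5 spatial upscale step in a two-stage
# pipeline. Nearest-neighbor resizing stands in for the learned model
# purely to show the shape change between the two passes.

def upscale_latent_nn(latent: np.ndarray, factor: float = 1.5) -> np.ndarray:
    """latent: (channels, height, width) first-stage latent."""
    c, h, w = latent.shape
    new_h, new_w = int(h * factor), int(w * factor)
    rows = (np.arange(new_h) / factor).astype(int)
    cols = (np.arange(new_w) / factor).astype(int)
    return latent[:, rows][:, :, cols]

stage1 = np.random.rand(128, 16, 24)        # hypothetical first-pass latent
stage2_in = upscale_latent_nn(stage1)       # fed to the refinement pass
print(stage1.shape, "->", stage2_in.shape)  # (128, 16, 24) -> (128, 24, 36)
```

The second pass then refines this enlarged latent, which is why the x1.5 variant is gentler than x2: the refinement model has less new detail to hallucinate per step.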

Free newsletter

Get notified when Spatial Upscaler x1.5 updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models