~1 GB · 2GB+ VRAM · VAE

LTX23_video_vae_bf16.safetensors

LTX 2.3 Video VAE (Kijai)

Standalone video VAE BF16 by Kijai. Alternative to taeltx2_3. Place in models/vae/.

Download LTX23_video_vae_bf16.safetensors

Direct HuggingFace download. ~1 GB · Free.

Install path: ComfyUI/models/vae/LTX23_video_vae_bf16.safetensors

No GPU with 2GB+ VRAM? Try LTX23_video_vae_bf16.safetensors online — free generation included

Skip the ~1 GB download and ComfyUI setup. Generate a 5-second video with this exact model in your browser in about 30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 2GB VRAM.

GPU                      | VRAM  | Verdict
RTX 3060 12GB            | 12GB  | Comfortable
RTX 4060 Ti / 4070       | 16GB  | Comfortable
RTX 4070 Ti SUPER / 4080 | 16GB  | Comfortable
RTX 3090 / 4090          | 24GB  | Comfortable
RTX 5090 / A6000         | 32GB+ | Comfortable

Recommendation: Alternative VAE option. Use in workflows that require the separate BF16 VAE component.

How to use LTX23_video_vae_bf16.safetensors

  1. Download the file from HuggingFace.
  2. Place it in ComfyUI/models/vae/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
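After step 2, you can sanity-check the install with a short Python sketch: it looks for the file at the expected path and, if present, parses the safetensors header (an 8-byte little-endian length followed by that many bytes of JSON) without loading any tensor data. The `ComfyUI/` root path below is an assumption — adjust it to your actual install location.

```python
import json
import struct
from pathlib import Path

def read_safetensors_header(path):
    """Read a .safetensors JSON header without loading tensor data.
    Format: 8-byte little-endian uint64 length N, then N bytes of JSON."""
    with open(path, "rb") as f:
        n = struct.unpack("<Q", f.read(8))[0]
        return json.loads(f.read(n))

# Hypothetical install root — adjust to where your ComfyUI lives.
vae_path = Path("ComfyUI/models/vae/LTX23_video_vae_bf16.safetensors")
if vae_path.is_file():
    header = read_safetensors_header(vae_path)
    print(f"OK: {len(header)} header entries")
else:
    print(f"Not found: {vae_path} — check the install path")
```

If the header parses, the download isn't truncated and ComfyUI should pick the file up after a restart.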

Compatible official workflows:

Don't want to run this locally? Try LTX23_video_vae_bf16.safetensors online with a free generation — no GPU, no install, ~30 seconds per clip.

Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/vae/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. The filename must match exactly: LTX23_video_vae_bf16.safetensors.

CUDA out of memory error when loading the model

LTX23_video_vae_bf16.safetensors needs ~2GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings
  • Lower the resolution (512x512 instead of 1280x720)
  • Reduce frame count (25 frames instead of 97)
  • Use a smaller variant — see Related models below.
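To see why lowering resolution and frame count helps, here is a rough arithmetic sketch of the decoded output tensor alone, in bf16 (2 bytes per element). Real VAE decoding peaks well above this because of intermediate activations, so treat it as a lower bound, not a VRAM estimate.

```python
def decoded_tensor_bytes(frames, height, width, channels=3, bytes_per_elem=2):
    """Size of the decoded RGB frame tensor in bf16 (2 bytes/element).
    Activation memory during decode comes on top of this figure."""
    return frames * height * width * channels * bytes_per_elem

full  = decoded_tensor_bytes(97, 720, 1280)  # 97 frames at 1280x720
small = decoded_tensor_bytes(25, 512, 512)   # 25 frames at 512x512
print(f"{full / 2**30:.2f} GiB vs {small / 2**30:.2f} GiB")
```

Dropping from 97 frames at 1280x720 to 25 frames at 512x512 shrinks the output tensor by roughly 13x, which is why both knobs together can rescue a borderline OOM.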

How do I use this VAE in ComfyUI?

This file is a VAE, not a LoRA — it does not go through a LoraLoader node, and LoRA strength settings don't apply. Load it with a 'Load VAE' (VAELoader) node and connect its VAE output to your VAE Decode node in place of the checkpoint's built-in VAE.
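In ComfyUI's API-format JSON, the wiring for this VAE can be sketched as a VAELoader node feeding a VAEDecode node. The node ids ("8", "10", "11") are placeholders — "8" stands for whatever sampler node produces your latents in the actual workflow.

```python
# Sketch of a ComfyUI API-format workflow fragment (node ids are placeholders).
workflow = {
    "10": {"class_type": "VAELoader",
           "inputs": {"vae_name": "LTX23_video_vae_bf16.safetensors"}},
    "11": {"class_type": "VAEDecode",
           "inputs": {"samples": ["8", 0],  # latents from your sampler node
                      "vae": ["10", 0]}},   # VAE output of the loader
}
print(workflow["11"]["inputs"]["vae"])
```

The `["10", 0]` reference is how ComfyUI links a node input to output slot 0 of node "10" — the same pattern applies whatever ids your workflow uses.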

Free newsletter

Get notified when LTX 2.3 Video VAE (Kijai) updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models