~0.5 GB · 2GB+ VRAM · text encoder component

ltx-2.3_text_projection_bf16.safetensors

LTX 2.3 Text Projection (Kijai)

Text projection BF16 component by Kijai. Place in models/text_encoders/.

Download ltx-2.3_text_projection_bf16.safetensors

Direct HuggingFace download. ~0.5 GB · Free.

Install path: ComfyUI/models/text_encoders/ltx-2.3_text_projection_bf16.safetensors

No compatible GPU? Try ltx-2.3_text_projection_bf16.safetensors online — free generation included

Skip the ~0.5 GB download and ComfyUI setup: generate a 5-second video with this exact model in your browser, in about 30 seconds.

Try this model online — free →

Will this run on my GPU?

Minimum: 2GB VRAM.

GPU                          VRAM    Verdict
RTX 3060                     12GB    Comfortable
RTX 4060 Ti / 4070           16GB    Comfortable
RTX 4070 Ti SUPER / 4080     16GB    Comfortable
RTX 3090 / 4090              24GB    Comfortable
RTX 5090 / A6000             32GB+   Comfortable

Recommendation: Required for workflows using separate text encoder components.

How to use ltx-2.3_text_projection_bf16.safetensors

  1. Download the file from HuggingFace.
  2. Place it in ComfyUI/models/text_encoders/ inside your ComfyUI directory.
  3. Restart ComfyUI (or refresh the model list from the menu).
  4. Load a compatible workflow — see below.
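The steps above can be scripted. A minimal shell sketch, assuming a local ./ComfyUI checkout and that you have already downloaded the file via the link above (the exact HuggingFace URL is omitted here — use the download link on this page):

```shell
# Sketch: put the text projection file where ComfyUI looks for text encoders.
# COMFY is an assumption — adjust it to your actual install root.
COMFY="./ComfyUI"
FILE="ltx-2.3_text_projection_bf16.safetensors"

# Make sure the target folder exists.
mkdir -p "$COMFY/models/text_encoders"

# After downloading via the link above, move the file into place:
# mv ~/Downloads/"$FILE" "$COMFY/models/text_encoders/"

# Verify ComfyUI will see it (exact filename, correct folder, no subfolder):
ls -lh "$COMFY/models/text_encoders/"
```

Restart ComfyUI afterwards so the new file is picked up by the model list.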

Compatible official workflows:


Common issues

ComfyUI doesn't see the file after I downloaded it

Make sure the file is in ComfyUI/models/text_encoders/ (not a subfolder). Restart ComfyUI fully — the menu refresh sometimes misses new files. The filename must match exactly: ltx-2.3_text_projection_bf16.safetensors.

CUDA out of memory error when loading the model

ltx-2.3_text_projection_bf16.safetensors needs ~2GB VRAM minimum. If you're hitting OOM:

  • Enable Sequential Offloading in ComfyUI settings
  • Lower the resolution (512x512 instead of 1280x720)
  • Reduce the frame count (25 frames instead of 97)
  • Use a smaller variant — see Related models below

Is this a LoRA? How do I load it in ComfyUI?

No — despite appearing in some LoRA-style listings, this file is a text projection component of the text encoder, not a LoRA, so don't load it through a LoraLoader node. Place it in ComfyUI/models/text_encoders/ and select it in the text encoder loader node of your LTX workflow; it is applied automatically alongside the rest of the text encoder.

Free newsletter

Get notified when LTX 2.3 Text Projection (Kijai) updates

Occasional updates on what's new in LTX 2.3 — new FP8 quants, LoRAs, IC-LoRA releases — with our hands-on verdict on whether they're worth re-downloading. No fixed cadence.

No spam. Sent occasionally when there's real news. Unsubscribe in one click.

Related models