Designed to be visually compact and simplified for ease of use. Personally, I think this is the most streamlined workflow there is. You can save notes for prompts and LoRA triggers right beside the prompt input, making it quick to swap between prompts and reference LoRA triggers.
The layout wastes as little space as possible and fits neatly into the ComfyUI workflow window at a 16:9 ratio, so you don't have to constantly rescale or pan the workflow to change settings. If you "fit to view" and click the zoom-in button two or three times, it will fit the window almost exactly with little wasted space.
_________
This workflow generates a 5 second 480x480 video in ~120 seconds on a 4070ti with the Q8 GGUF model without Sage Attention enabled.
This workflow does not rely on tricks like upscaling and uses mostly basic nodes and extensions, so it should be very easy to get working with minimal effort.
This workflow uses LCM sampling with the LIGHTX2V LoRA to speed up generation. In the current design, two additional LoRAs can be loaded at the same time.
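For reference, here is a minimal sketch of that chain in ComfyUI's API (prompt) format, assuming the stock LoraLoaderModelOnly and KSampler nodes plus the UnetLoaderGGUF node from the ComfyUI-GGUF extension. The node IDs, the two optional LoRA filenames, and the conditioning/latent links are placeholders, not the workflow's exact graph.

```python
# Sketch of the model chain only: GGUF UNet -> speed-up LoRA -> two optional
# LoRAs -> LCM sampler. Node IDs and the two "my_*" LoRA names are placeholders.
prompt_fragment = {
    "1": {"class_type": "UnetLoaderGGUF",        # loader node from ComfyUI-GGUF
          "inputs": {"unet_name": "wan2.1-i2v-14b-480p-Q8_0.gguf"}},  # example quant filename
    "2": {"class_type": "LoraLoaderModelOnly",   # LIGHTX2V speed-up LoRA
          "inputs": {"model": ["1", 0],
                     "lora_name": "Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors",
                     "strength_model": 1.0}},
    "3": {"class_type": "LoraLoaderModelOnly",   # optional LoRA slot 1
          "inputs": {"model": ["2", 0],
                     "lora_name": "my_style_lora.safetensors",
                     "strength_model": 1.0}},
    "4": {"class_type": "LoraLoaderModelOnly",   # optional LoRA slot 2
          "inputs": {"model": ["3", 0],
                     "lora_name": "my_motion_lora.safetensors",
                     "strength_model": 1.0}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0],          # sampler sees the fully LoRA-patched model
                     "sampler_name": "lcm",
                     "scheduler": "simple",
                     "steps": 4,                 # the distill LoRA works at low step counts
                     "cfg": 1.0,                 # distill LoRAs typically run at CFG 1 (assumed)
                     "seed": 0, "denoise": 1.0,
                     "positive": ["10", 0],      # placeholder conditioning links
                     "negative": ["11", 0],
                     "latent_image": ["12", 0]}},
}
```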
_________
The main settings you may want to change are the output resolution and the sampler steps. Other samplers or schedulers may work, but I find LCM/Simple gives the most coherent output. The only other setting worth adjusting is the LoRA strengths. There is also "SHIFT", which can behave somewhat like a CFG setting: in my experience it can drastically change how a prompt or LoRA is expressed and create more dramatic movement, but it should generally be left at its default.
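As a quick reference, the handful of values worth touching could be summarized like this. This is a sketch only; the widget names and the frame-length and SHIFT defaults are assumptions based on typical WAN i2v setups, not the exact titles used in this workflow.

```python
# Assumed defaults for the settings discussed above; only resolution, steps
# and the LoRA strengths normally need changing.
settings = {
    "width": 480, "height": 480,   # output resolution (timing above was measured at 480x480)
    "length": 81,                  # frame count; roughly 5 s at WAN's 16 fps (assumed default)
    "steps": 4,                    # try 6-8 if motion looks rough
    "sampler_name": "lcm",         # LCM/Simple gave the most coherent output
    "scheduler": "simple",
    "lora_strength": 1.0,          # per-LoRA strength; adjust per LoRA and prompt
    "shift": 8.0,                  # "SHIFT"; example value - leave at the workflow default
}
```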
_________
Note: Sage Attention is disabled by default. To enable it (if you have the prerequisites installed), select the "Enable for Sage Attention" node and press Ctrl+B to enable it, then change the "sage_attention" option below it from disabled to enabled. Even if you don't plan on using Sage Attention, you still need to install the extension for the workflow to operate.
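If you're unsure whether the prerequisites are present, a quick check from the same Python environment ComfyUI runs in looks like this; the PyPI package name is sageattention, while the ComfyUI patch extension itself is installed separately.

```python
# Verify the sageattention package is importable before enabling the node.
try:
    import sageattention  # noqa: F401
    print("sageattention found - you can switch the node's option to 'enabled'")
except ImportError:
    print("sageattention missing - leave 'sage_attention' disabled (the default)")
```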
_________
Required models:
GGUF i2v models: https://huggingface.co/city96/Wan2.1-I2V-14B-480P-gguf/tree/main
CLIP Vision model: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/clip_vision/clip_vision_h.safetensors
LIGHTX2V Speedup LORA model: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
Or the new proper I2V LIGHTX2V LoRA model (OK at 4 steps, but works amazingly well at 8 steps):
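If you prefer scripting the downloads, here is a minimal sketch using huggingface_hub. The GGUF filename is an example quant (check the repo for the exact name you want), and the target folders assume a standard ComfyUI install; hf_hub_download mirrors the repo's subfolder structure under local_dir, so you may need to move files into the expected model folders afterwards.

```python
from huggingface_hub import hf_hub_download

# GGUF i2v model -> ComfyUI/models/unet (example Q8_0 quant; pick the one you want)
hf_hub_download(repo_id="city96/Wan2.1-I2V-14B-480P-gguf",
                filename="wan2.1-i2v-14b-480p-Q8_0.gguf",
                local_dir="ComfyUI/models/unet")

# CLIP Vision model -> ComfyUI/models/clip_vision
hf_hub_download(repo_id="Comfy-Org/Wan_2.1_ComfyUI_repackaged",
                filename="split_files/clip_vision/clip_vision_h.safetensors",
                local_dir="ComfyUI/models/clip_vision")

# LIGHTX2V speed-up LoRA -> ComfyUI/models/loras
hf_hub_download(repo_id="Kijai/WanVideo_comfy",
                filename="Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors",
                local_dir="ComfyUI/models/loras")
```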
_________
• Same design as previously
• Geared toward running the WAN 2.2 Low Noise model only
• See main page or below for more information and see "required models" section for new requirements
• The Light X2V LoRA works at strengths from 1.1 to 2.0 and can dramatically alter the model's behavior, in both beneficial and detrimental ways. After testing I chose 1.5 as the default strength since it seemed the most reliable, but do experiment.
Note: I have since found that a Light X2V strength of 1.688 comes closest to WAN 2.1 behavior, though it's still not perfect.
• WAN 2.2 is much more dynamic, which means it needs a slightly different prompting style than you may have used with WAN 2.1. The same goes for its effect on LoRAs, which tend to be amplified in strength; that can be both good and bad, but overall I'm seeing some pretty good results with lots of keepers. The main keys to good results are learning how to prompt it and tinkering with LoRA strengths, depending on the LoRA and how it behaves with your prompt and input image. Raising the step count to 6 or 8 can also improve results.
• There can be some bad generations that go off the rails, but all in all once you dial things in, WAN 2.2 can generate a lot of keepers that you could never get with WAN 2.1.