This is the official workflow hub for the Wan2.1_14B_FusionX models. Here you'll find a full set of workflows designed to unlock the model's potential across a range of generation types, including:
🎬 Text-to-Video (T2V) - Available now. Just drag and drop the PNG file into ComfyUI. (I've included a sample video created with the current settings in the folder.)
🖼️ Image-to-Video (I2V) - Available now. Drag and drop the PNG into ComfyUI. I've included the start frame from the example video if you want to test it. (Please note: the Wrapper version supports a start AND an end frame; Native supports only a start frame.)
⚠️ NOTE: For image-to-video, set your frame count to 121 and your FPS to 24 to get up to a 50% increase in overall motion in the video. After some testing, this really helps!
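As a quick sanity check on the recommended settings above, 121 frames at 24 FPS works out to roughly a five-second clip (a minimal sketch; in ComfyUI you would enter these values on your frame-count and frame-rate node inputs rather than in code):

```python
# Recommended I2V settings from the note above.
frame_count = 121
fps = 24

# Resulting clip length in seconds.
duration_s = frame_count / fps
print(f"{frame_count} frames @ {fps} fps = {duration_s:.2f} s")  # -> 5.04 s
```

If you change either value, keep this relationship in mind so your clip length stays where you want it.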
👻 Phantom (Multi-Image Fusion to Video) - Wrapper only. Native TBD.
🧠 VACE (Context-Aware Video + Image-to-Video) - Coming soon.
Each category will include both:
✅ Native Workflows - Built directly with WAN components for full control and customization.
🚀 Wrapper Workflows (recommended) - Use the Kijai Wrapper for optimized generation speed.
These are the same workflows used in all demo videos on the model's main page - no extra LoRAs, upscaling, or interpolation. Just clean, raw model outputs with the right settings.
⚠️ All required components (e.g., CausVid, AccVideo, MPS LoRAs) are already baked into the model. Do not re-add them unless you know what you're doing.
Whether you're looking to create cinematic text-to-video scenes, stylized image-driven sequences, or combine multiple references into a single shot - these workflows are your starting point.
GGUF node added.