Hi! Introducing my WAN 2.2 10-step models.
Download
FP8/FP16 + Workflows - https://huggingface.co/StefanFalkok/Wan_2.2_10steps/tree/main
GGUF + Workflows - https://huggingface.co/StefanFalkok/Wan_2.2_10steps_GGUF/tree/main (Only Q8_0 at the moment)
Everything you need: just load the model without the LightX2V LoRA (it is already merged in), set 10 steps (5 high noise / 5 low noise), set CFG 2 on the high-noise model and CFG 1 on the low-noise model, set 81 frames at a framerate of 16, and pick a resolution from 480p to 720p.
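For reference, here is how those settings map onto the two-pass (high noise, then low noise) sampling used in the bundled ComfyUI workflows, written as a minimal Python sketch. The dict keys and model filenames are illustrative assumptions, not a real ComfyUI API:

```python
# Two-stage WAN 2.2 sampling sketch: the high-noise model handles the
# first half of the step schedule, the low-noise model finishes it.

TOTAL_STEPS = 10  # 5 high-noise + 5 low-noise

high_noise_pass = {
    "model": "wan2.2_t2v_high_noise_10steps.safetensors",  # hypothetical filename
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": 5,    # first half of the schedule
    "cfg": 2.0,          # CFG 2 on the high-noise model
}

low_noise_pass = {
    "model": "wan2.2_t2v_low_noise_10steps.safetensors",   # hypothetical filename
    "steps": TOTAL_STEPS,
    "start_at_step": 5,  # continue where the high-noise pass stopped
    "end_at_step": 10,
    "cfg": 1.0,          # CFG 1 on the low-noise model
}

video_settings = {
    "frames": 81,        # 81 frames at 16 fps is roughly 5 seconds
    "framerate": 16,
    "width": 1024,       # anything from 480p up to 720p works
    "height": 576,
}
```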
I merged the original WAN 2.2 models from the ComfyUI repository with the LightX2V T2V 14B cfg/step-distill v2 rank-256 BF16 LoRA (https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank256_bf16.safetensors).
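For anyone curious how such a merge works in general, below is a minimal sketch of baking a LoRA into base weights (W' = W + scale * up @ down) with safetensors. This shows the general technique, not necessarily the exact script used for these checkpoints; key suffixes vary between LoRA files, so the names and filenames here are assumptions:

```python
import torch
from safetensors.torch import load_file, save_file

# Hypothetical filenames for illustration.
base = load_file("wan2.2_t2v_high_noise_14B_fp16.safetensors")
lora = load_file("lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank256_bf16.safetensors")

scale = 1.0  # LoRA strength applied at merge time
for key in list(base.keys()):
    # Assumed key convention: "<layer>.lora_down.weight" / "<layer>.lora_up.weight";
    # real LoRA files may use different prefixes and suffixes.
    down_key = key.replace(".weight", ".lora_down.weight")
    up_key = key.replace(".weight", ".lora_up.weight")
    if down_key in lora and up_key in lora:
        down = lora[down_key].float()  # [rank, in_features]
        up = lora[up_key].float()      # [out_features, rank]
        # Fold the low-rank update into the base weight, keep original dtype.
        base[key] = (base[key].float() + scale * (up @ down)).to(base[key].dtype)

save_file(base, "wan2.2_t2v_high_noise_10steps_merged.safetensors")
```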
FP8 generation times on an RTX 5080:
480p - around 2.5 minutes
1024x576 - around 3.5-4 minutes
720p - around 7.5 minutes
Q8_0 and FP16 take about 25-30% more time to generate a video, but you get higher quality and more stable results.
If you need GGUF Q6, Q5, Q4, etc. models, send me a DM or leave a comment.
My TG Channel - https://t.me/StefanFalkokAI
My TG Chat - https://t.me/+y4R5JybDZcFjMjFi