Note: There are other Wan Video files hosted on Civitai, and some may be duplicates. This model card primarily hosts the files used by Wan Video in the Civitai Generator. Watch this space!
👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
👍 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB of VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization), with quality comparable to some closed-source models. A runnable sketch is shown after this list.
👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
👍 Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
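As a concrete illustration of the consumer-GPU point above, here is a minimal text-to-video sketch using the WanPipeline integration in Hugging Face diffusers. It assumes a recent diffusers release that includes WanPipeline and AutoencoderKLWan, and the Diffusers-format checkpoint Wan-AI/Wan2.1-T2V-1.3B-Diffusers; the prompt, resolution, and frame count are illustrative, not prescribed settings.

```python
# Minimal text-to-video sketch, assuming diffusers >= 0.33 with WanPipeline support
# and the Diffusers-format checkpoint Wan-AI/Wan2.1-T2V-1.3B-Diffusers.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# Load the Wan-VAE separately in float32 for numerical stability; the rest of
# the pipeline runs in bfloat16 to keep VRAM within consumer-GPU limits.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # optional: trades speed for lower peak VRAM

# Illustrative prompt and 480P settings (81 frames at 15 fps is roughly 5 seconds).
frames = pipe(
    prompt="A cat walks on the grass, realistic style",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "wan_t2v_480p.mp4", fps=15)
```

Loading the VAE as its own component mirrors the "Powerful Video VAE" point: Wan-VAE handles the spatio-temporal encode/decode while the diffusion transformer runs in lower precision.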
Original HuggingFace Repo: https://huggingface.co/Wan-AI
Wan Video is a video generation AI model distributed as Safetensors / Checkpoint files and uploaded to Civitai by community user theally. It is not a Stable Diffusion derivative: it packages Wan2.1, the open-source video foundation model from the Wan-AI team (linked above), and targets the use cases tagged on this page, such as base model and video generation.
Wan Video is a popular choice among users for generating high-quality videos from text prompts.
You can download the latest version of Wan Video from this page.
To use Wan Video, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion; it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint AI Models like Wan Video, check out our crash course in AI image generation.
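If you prefer to fetch the checkpoint programmatically rather than through the site, here is a minimal download sketch using huggingface_hub. The repo id mirrors the official HuggingFace repository linked above; where you place the files is an assumption that depends on which UI you use.

```python
# Minimal download sketch, assuming the Wan-AI/Wan2.1-T2V-1.3B repo on HuggingFace.
from huggingface_hub import snapshot_download

# Downloads the full repo snapshot and returns the local cache path;
# copy or symlink the checkpoint files to wherever your UI expects models.
local_dir = snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B")
print(f"Checkpoint files downloaded to: {local_dir}")
```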