HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning
✨ Key Features
HuMo is a unified, human-centric video generation framework designed to produce high-quality, fine-grained, and controllable human videos from multimodal inputs, including text, images, and audio. It supports strong text-prompt following, consistent subject preservation, and synchronized audio-driven motion.
VideoGen from Text-Image - Customize character appearance, clothing, makeup, props, and scenes using text prompts combined with reference images.
VideoGen from Text-Audio - Generate audio-synchronized videos solely from text and audio inputs, removing the need for image references and enabling greater creative freedom.
VideoGen from Text-Image-Audio - Achieve the highest level of customization and control by combining text, image, and audio guidance.
Examples and models from the following sources are reuploaded here for convenience:
https://huggingface.co/bytedance-research/HuMo
https://github.com/Phantom-video/HuMo
Compatible with both 480P and 720P resolutions; 720P inference produces noticeably better quality.
HuMo for Wan is a specialized video generation AI model of type Safetensors / Checkpoint, uploaded by AI community user Cyph3r. Derived from the Wan Video 14B t2v base model, it has undergone an extensive fine-tuning process, ensuring that it generates videos that are highly relevant to the human-centric use cases it was designed for.
To use HuMo for Wan, download the model checkpoint file and set up a UI for running such models (for example, AUTOMATIC1111). Then provide the model with a detailed text prompt to generate output, and experiment with different prompts and settings to achieve the desired results. If this sounds complicated, check out our initial guide to Stable Diffusion. And if you want to dive deeper into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint AI models like HuMo for Wan, check out our crash course in AI image generation.