**Don't forget to Like 👍 the model. ;)**
*Just added a version without the automatic image resize, since many people were getting errors with it. The manual version works reliably. Sorry about that :)*
Straightforward: this is an Image-to-Video workflow built with the resources available today (January 2025) and the Hunyuan models. Using the I2V LeapFusion LoRA plus IP2V encoding, it can be very consistent; in my opinion, on par with an older Kling version in that regard. It's not perfect, but it delivers solid results when used well, especially for videos of humans.
I kept it as simple as possible and didn't include the faceswap node this time, but it's a great addition if you're planning to generate videos of human subjects. VRAM usage depends heavily on the length and dimensions of the video you want to generate, but 12GB of VRAM is ideal for good results.
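For a sense of why length and resolution matter so much, here's a rough sketch (plain Python, not part of the workflow) of the latent tensor the sampler has to process. It assumes HunyuanVideo's usual 8x spatial / 4x temporal VAE compression and 16 latent channels; the latents themselves are small, but attention cost grows with the number of latent positions, and that is what actually eats VRAM:

```python
# Rough latent-size arithmetic for a HunyuanVideo-style I2V clip.
# Assumptions: 8x spatial / 4x temporal VAE compression, 16 latent
# channels, fp16 latents. This is only a lower bound; real VRAM use is
# dominated by the transformer's activations over these positions.

def latent_stats(width: int, height: int, frames: int) -> None:
    lat_w, lat_h = width // 8, height // 8   # spatial compression
    lat_t = (frames - 1) // 4 + 1            # temporal compression
    channels, bytes_per_val = 16, 2          # fp16 latents
    positions = lat_w * lat_h * lat_t
    size_mb = positions * channels * bytes_per_val / 1024**2
    print(f"{width}x{height}, {frames} frames -> "
          f"{positions:,} latent positions, ~{size_mb:.1f} MB of latents")

latent_stats(544, 960, 97)    # a modest portrait clip
latent_stats(720, 1280, 129)  # bigger and longer: far heavier to sample
```

Halving the resolution or trimming frames shrinks the position count fast, which is the easiest lever if you're running out of memory on 12GB.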
As always, instructions and links are included in the workflow. Don’t forget to update Comfy and HunyuanVideoWrapper nodes!
That’s it. Leave a like and have fun!
Jan 2025 - First release.