Trained on 2D illustrations composited onto photo backgrounds.
This is a small LoRA I made to see whether models trained on illustrations or on real-world images/video can produce a composited, mixed-reality effect.
ℹ️ LoRAs work best when applied to the base models they were trained on. Please read the About This Version section of the appropriate base model for workflow/training information.
Metadata is included in all uploaded files; you can drag the generated videos into ComfyUI to use the embedded workflows.
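ComfyUI stores the workflow as JSON in the file's metadata (for PNG images, a `tEXt` chunk keyed `workflow`). As a rough sketch of what "embedded workflow" means here, the snippet below builds a minimal PNG with a hypothetical workflow payload and reads it back using only the standard library; the chunk layout follows the PNG spec, but the payload and keyword handling for other container formats (e.g. video files) may differ:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: 4-byte length, type, data, CRC of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def extract_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt entries as {keyword: text}."""
    assert png.startswith(PNG_SIG), "not a PNG file"
    out, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack_from(">I", png, pos)
        ctype = png[pos + 4 : pos + 8]
        data = png[pos + 8 : pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, NUL separator, latin-1 text
            key, _, val = data.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length field + type + data + CRC
    return out

# Demo: a minimal 1x1 PNG carrying a "workflow" tEXt chunk.
# The workflow content here is a hypothetical placeholder, not a real graph.
workflow = {"3": {"class_type": "KSampler"}}
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
png = (PNG_SIG
       + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"tEXt", b"workflow\x00" + json.dumps(workflow).encode())
       + png_chunk(b"IEND", b""))

print(extract_text_chunks(png)["workflow"])
```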
Recommended prompt structure:
Positive prompt (trigger at the end of the prompt, before quality tags for non-Hunyuan versions):
{{tags}}
real world location, photo background,
masterpiece, best quality, very awa, absurdres
Negative prompt:
(worst quality, low quality, sketch:1.1), error, bad anatomy, bad hands, watermark, ugly, distorted, censored, lowres
[WAN 14B] LoRA
Trained with diffusion-pipe on Wan2.1-T2V-14B with a mixed image/video dataset: 37 images, 23 videos.
Video previews generated with ComfyUI_examples/wan/#text-to-video
Loading the LoRA with the LoraLoaderModelOnly node and using the fp8 14B model wan2.1_t2v_14B_fp8_e4m3fn.safetensors
Image previews generated with modified ComfyUI_examples/wan/#text-to-video
Setting the frame length to 1
Adding Upscaling
Image to Video previews generated with ComfyUI_examples/wan/#image-to-video
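In ComfyUI's API (JSON) workflow format, the model-loading steps above look roughly like the fragment below. This is a sketch, not the exact embedded workflow: the node ids, the LoRA filename, and the strength value are placeholders.

```json
{
  "10": {
    "class_type": "UNETLoader",
    "inputs": {
      "unet_name": "wan2.1_t2v_14B_fp8_e4m3fn.safetensors",
      "weight_dtype": "fp8_e4m3fn"
    }
  },
  "11": {
    "class_type": "LoraLoaderModelOnly",
    "inputs": {
      "model": ["10", 0],
      "lora_name": "my_lora.safetensors",
      "strength_model": 1.0
    }
  }
}
```

Downstream nodes (the KSampler, etc.) take their model input from node "11" so the LoRA-patched weights are used; for single-image previews, the frame length is set to 1 as noted above.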