Training data is a subset of all my manually rated datasets with the quality/aesthetic modifiers, including only the masterpiece-tagged images.
Recommended prompt structure (remove the score tags for Illustrious):
Positive prompt:
{{tags}}
score_9, score_8_up, score_7_up, score_6_up, absurdres, masterpiece, best quality, very aesthetic

Negative prompt:
(worst quality, low quality:1.1), score_4, score_3, score_2, score_1, error, bad anatomy, bad hands, watermark, ugly, distorted, signature
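As an illustration only, the structure above can also be assembled programmatically. This minimal Python sketch mirrors the tag lists given above; the example content tags in the usage comment are hypothetical:

```python
# Minimal sketch: build the recommended positive/negative prompts, dropping the
# Pony score tags when targeting Illustrious (as noted above).
QUALITY_TAGS = [
    "score_9", "score_8_up", "score_7_up", "score_6_up",
    "absurdres", "masterpiece", "best quality", "very aesthetic",
]
NEGATIVE_TAGS = [
    "(worst quality, low quality:1.1)",
    "score_4", "score_3", "score_2", "score_1",
    "error", "bad anatomy", "bad hands", "watermark",
    "ugly", "distorted", "signature",
]

def build_prompts(tags: list[str], base_model: str = "pony") -> tuple[str, str]:
    """Return (positive, negative) prompt strings for the given content tags."""
    quality, negative = QUALITY_TAGS, NEGATIVE_TAGS
    if base_model == "illustrious":
        quality = [t for t in quality if not t.startswith("score_")]
        negative = [t for t in negative if not t.startswith("score_")]
    return ", ".join(list(tags) + quality), ", ".join(negative)

# Usage (hypothetical tags):
# pos, neg = build_prompts(["1girl", "outdoors"], base_model="illustrious")
```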
[WAN 14B] LoRA (experimental)
Trained with diffusion-pipe on Wan2.1-T2V-14B with the same (image-only) dataset as v2.3 [noobai v-pred]
Currently curating a video dataset
Video previews were generated with the ComfyUI_examples/wan/#text-to-video workflow
The LoRA is loaded with the LoraLoaderModelOnly node, using the fp8 14B checkpoint: wan2.1_t2v_14B_fp8_e4m3fn.safetensors (see the node sketch below)
Higher-quality previews use the full fp16 14B checkpoint: wan2.1_t2v_14B_fp16.safetensors
Following the prompting guide for movement is recommended to avoid still images/jitter: https://www.comfyonline.app/blog/wan2-1-prompt-guide
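For reference, here is a minimal sketch of the model-loading portion of that setup in ComfyUI's API (prompt JSON) format, written as a Python dict. The node IDs and the LoRA filename are placeholders, and the rest of the graph (text encoder, sampler, VAE decode) from the linked example is omitted:

```python
# Minimal sketch of loading the fp8 WAN checkpoint and applying the LoRA with
# LoraLoaderModelOnly, in ComfyUI API format. Node IDs and the LoRA filename
# ("wan_lora.safetensors") are hypothetical; remaining nodes are omitted.
workflow = {
    "1": {
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "wan2.1_t2v_14B_fp8_e4m3fn.safetensors",
            "weight_dtype": "fp8_e4m3fn",
        },
    },
    "2": {
        # LoraLoaderModelOnly patches only the diffusion model (no CLIP).
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["1", 0],                    # output of the UNETLoader
            "lora_name": "wan_lora.safetensors",  # hypothetical filename
            "strength_model": 1.0,
        },
    },
}
```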
Image previews were generated with a modified ComfyUI_examples/wan/#text-to-video workflow
Setting the frame length to 1 (see the single-frame sketch below)
Adding upscaling
Better results with text-to-image than text-to-video for this version (due to training on images only)
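As an illustration of the single-frame modification, this sketch rewrites an exported API-format workflow so the text-to-video graph produces a still image. The export filenames and the latent node's class/input names are assumptions based on the linked ComfyUI example and may need adjusting to match your actual export; the upscaling step is not shown:

```python
import json

# Minimal sketch: force the text-to-video workflow to render a single frame.
# The filenames and the latent node's class name ("EmptyHunyuanLatentVideo")
# with its "length" input are assumptions; adjust to match your workflow.
with open("wan_t2v_workflow_api.json") as f:  # hypothetical export filename
    workflow = json.load(f)

for node in workflow.values():
    if node.get("class_type") == "EmptyHunyuanLatentVideo":
        node["inputs"]["length"] = 1  # 1 frame -> still image preview

with open("wan_t2i_workflow_api.json", "w") as f:
    json.dump(workflow, f, indent=2)
```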