Trained on images with flat colors, no visible lineart, and little to no indication of depth.
This is another small style LoRA I thought would be interesting to try with a v-pred model (NoobAI v-pred), particularly for its reduced color bleeding and strong blacks.
Recommended prompt structure:
Positive prompt:
flat color, no lineart,
{{tags}}
masterpiece, best quality, very awa, absurdres
Negative prompt:
(worst quality, low quality, sketch:1.1), error, bad anatomy, bad hands, watermark, ugly, distorted, censored, lowres, abstract, signature, bkub
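For example, with the {{tags}} placeholder filled in (the subject tags here are purely illustrative):
flat color, no lineart, 1girl, solo, looking at viewer, masterpiece, best quality, very awa, absurdres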
[WAN 2.2 TI2V 5B] LoRA
Trained with diffusion-pipe on Wan2.2-TI2V-5B
Experimental - first test for Wan 2.2 training
Image dataset only
Less effect at higher frame counts (longer videos)
Text-to-video previews generated with the ComfyUI Wan 2.2 text-to-video example workflow (ComfyUI_examples/wan22/#text-to-video)
LoRA loaded with the LoraLoaderModelOnly node
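In a minimal graph, the loader sits between the model loader and the sampler (a rough sketch of the node chain; strength_model = 1.0 is just a starting point, not a tuned value):
Load Diffusion Model → LoraLoaderModelOnly (strength_model = 1.0) → KSampler → VAE Decode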
dataset.toml
# Resolution settings.
resolutions = [1024]
# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7
[[directory]] # IMAGES
# Path to the directory containing images and their corresponding caption files.
path = '/mnt/d/training_data/images'
num_repeats = 5
resolutions = [1024]
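diffusion-pipe pairs every image with a caption file sharing its basename, so the images directory is expected to look roughly like this (filenames are illustrative):
/mnt/d/training_data/images/
├── 0001.png
├── 0001.txt   # caption/tags for 0001.png
├── 0002.png
└── 0002.txt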
config.toml
# Output path.
output_dir = '/mnt/d/wan/training_output'
# Dataset config file.
dataset = 'dataset.toml'
# Training settings
epochs = 50
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 4
gradient_clipping = 1.0
warmup_steps = 100
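# Optionally swap transformer blocks to CPU to reduce VRAM usage: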
# blocks_to_swap=32
# Eval settings
eval_every_n_epochs = 5
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1
# Misc settings
save_every_n_epochs = 5
checkpoint_every_n_minutes = 30
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
steps_per_print = 1
video_clip_mode = 'single_middle'
[model]
type = 'wan'
ckpt_path = '../Wan2.2-TI2V-5B'
dtype = 'bfloat16'
# You can use fp8 for the transformer when training LoRA.
transformer_dtype = 'float8'
timestep_sample_method = 'logit_normal'
[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'
[optimizer]
type = 'adamw_optimi'
lr = 5e-5
betas = [0.9, 0.99]
weight_decay = 0.02
eps = 1e-8
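With both files in place, training can be launched as in the diffusion-pipe README (a single-GPU sketch, run from the diffusion-pipe repo root; adjust the config path to wherever you saved the files above):
NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config config.toml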