motimalu
4 months ago

Kirazuri Lazuli (Noobai V-Pred)

This checkpoint is a personal project trained locally on a single RTX 4090 from NoobAI-XL (NAI-XL) V-Pred 1.0-Version, using a small dataset of 14,065 images.

It focuses on adding knowledge from after the base model's data cutoff (2024/10/24), including styles, concepts, and characters from anime, video games, and virtual YouTubers.

Usage

Your preferred generation settings for NoobAI-XL (NAI-XL) V-Pred 1.0-Version should be mostly transferable.

Previews are generated with a ComfyUI workflow using DynamicThresholdingFull, Upscaling, and FaceDetailer.

DynamicThresholding (CFG-Fix) settings used with a CFG of 10:

  • dynthres_enabled: True

  • dynthres_mimic_scale: 7

  • dynthres_threshold_percentile: 1

  • dynthres_mimic_mode: Half Cosine Down

  • dynthres_mimic_scale_min: 1

  • dynthres_cfg_mode: Half Cosine Down

  • dynthres_cfg_scale_min: 3

  • dynthres_sched_val: 1

  • dynthres_separate_feature_channels: enable

  • dynthres_scaling_startpoint: ZERO

  • dynthres_variability_measure: STD

  • dynthres_interpolate_phi: 1

For samplers, Euler is recommended for generation and Euler Ancestral for upscaling/inpainting.

reForge and Forge should also work, as the issue from version 1.0 has been resolved (apologies if you ran into problems with that version).

*So that the model is automatically detected as a v-pred model in Forge/reForge, the ztsnr and v_pred keys were added to the model's state dict using this script.
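The script itself is the one linked above; purely as an illustration of the idea, a minimal sketch of my own (not the linked script) using the safetensors library might look like this, with placeholder file names:

```python
# Minimal sketch (assumption: not the author's linked script) of adding the
# marker keys so Forge/reForge auto-detect v-prediction.
import torch
from safetensors.torch import load_file, save_file

src = "kirazuri-lazuli.safetensors"        # placeholder: original checkpoint
dst = "kirazuri-lazuli-vpred.safetensors"  # placeholder: output checkpoint

state_dict = load_file(src)
# The UIs reportedly only check for the presence of these key names,
# so tiny dummy tensors are enough.
state_dict.setdefault("v_pred", torch.zeros(1))
state_dict.setdefault("ztsnr", torch.zeros(1))
save_file(state_dict, dst)
```

Note that save_file does not carry over any safetensors metadata from the original file unless you pass it explicitly.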

Quality modifiers masterpiece, best quality, very aesthetic should be positioned at the end of the prompt.

Artist names can be prefixed with artist: to prevent token bleeding between artist names and concepts.

A1111 schedule prompting syntax is used in ComfyUI through the comfyui-prompt-control extension to combine artist styles, e.g. artist:[artist1|artist2|artist3]

In some cases Regional Prompting with Attention Couple is also used (example).

Positive prompt:

{{characters}}, {{copywrites}}, {{artists}},
{{tags}},
absurdres, masterpiece, best quality, very aesthetic
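For illustration only, a filled-in version of the template might look like the following (the character and series are taken from the trained list below; the general tags and artist placeholders are just examples, not tested prompts):

hoshimachi suisei, hololive, artist:[artist1|artist2|artist3],
1girl, solo, blue hair, blue eyes, smile, upper body,
absurdres, masterpiece, best quality, very aesthetic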

Training details

The kohya-ss/sd-scripts training configs used can be found on GitHub.

v2.0

This version now has a much better representation of all the characters, concepts, and styles I hoped to train for this checkpoint.

Single training run on the full dataset, expanded with more recent data:

  • Training images: 14,065

  • Regularization images: 7,056 (generated from NoobAI-XL (NAI-XL) V-Pred 1.0-Version)

  • Optimizer: Adafactor

  • Training precision: Full-fat fp32

  • Batch size: 4

  • U-Net LR: 6e-6

  • TE LR: 2e-6

  • Epochs: 50

  • Steps: 352,600 (~344 GPU hours at 3.52s/it)
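As a rough sanity check on those numbers: 352,600 steps × 3.52 s/it ≈ 1,241,000 seconds, or roughly 345 hours, consistent with the quoted ~344 GPU hours.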

v1.1

Iterative checkpoint training approach inspired by PixelWave.

This involved training on dataset batches of ~1,200 images over 10 training sessions, before finishing with an 11th session on an aesthetic finetune dataset of 267 images.

  • Adafactor optimizer

  • Full-fat fp32 training precision

  • Batch size and LR were adjusted multiple times

    • Batch size 4, LR 6e-6 seemed most stable

  • TE trained for the 10th and 11th training sessions at batch size 4, LR 2e-6

  • A regularization dataset generated from the 10th checkpoint was used in the final aesthetic training to preserve the previously learned characters

⏳ Will share more previews for the trained concepts below as I have time to test them.

List of new series/characters trained:

anime:

  • dandadan

  • girumasu

  • gundam gquuuuuux

  • solo leveling

  • witch watch

  • kusuriya no hitorigoto

video-games:

  • elden ring nightreign

  • metaphor: refantazio

  • monster hunter wilds

  • fate/go (lilith)

  • genshin impact (citlali, escoffier, lan-yan, varesa, xilonen, yumemizuki mizuki)

  • honkai star rail (aglaea, castorice, cipher)

  • wuthering waves (carlotta, cartethyia, chisa, ciaccona, zani)

  • zenless zone zero (astra-zao, cipher, ju-fufu, luciana de montefio, pulchra fellini, sweety, trigger, vivian-banshee, yi xuan)

hololive:

  • flow glow (isaki riona, kikirara vivi, koganei niko, mizumiya su, rindo chihaya)

  • hoshimachi suisei (11th, caramel-pain, kireigoto, spectra-of-nova, supernova)

  • himemori luna (7th)

  • houshou marine (ahoy pirates)

  • natsuiro matsuri (jersey maid)

  • nekomata okayu (personya respect)

  • ookami mio (8th)

  • oozora subaru (police)

  • roboco san (oriental)

  • shirakami fubuki (fbkingdom)

  • usada-pekora (10th)

indie v-tubers:

  • amagai ruka

  • dooby

  • nimi nightmate

  • yuuki sakuna

other:

  • myaku-myaku (expo2025)

List of concepts trained:

clothing:

  • ancient greek clothes

  • chronopattern dress

  • jirai kei

  • water dress

concepts:

  • fourth wall

  • star trail

  • flower field

  • mechabare

  • monster girl

  • year of the snake

Some intentionally tagged/curated style triggers, from 103 artist datasets:

  • blending

  • flat color

  • no lineart

  • impasto

  • painterly

  • chiaroscuro

  • impressionism

  • ink wash painting

  • pastel colors

  • pencil art

  • neon palette

  • dark

  • colorful

Traditional media group tags are also trained (some are not supported by enough data):

  • traditional media

  • acrylic paint \(medium\)

  • airbrush \(medium\)

  • ballpoint_pen \(medium\)

  • brush \(medium\)

  • chalk \(medium\)

  • calligraphy_brush \(medium\)

  • canvas \(medium\)

  • charcoal \(medium\)

  • colored_pencil \(medium\)

  • color ink \(medium\)

  • coupy pencil \(medium\)

  • crayon \(medium\)

  • gouache \(medium\)

  • graphite \(medium\)

  • ink \(medium\)

  • marker \(medium\)

  • millipen \(medium\)

  • nib pen \(medium\)

  • oil painting \(medium\)

  • painting \(medium\)

  • pastel \(medium\)

  • photo \(medium\)

  • tempera \(medium\)

  • watercolor \(medium\)

Recognitions

Thanks to Laxhar Lab for the NoobAI-XL (NAI-XL) V-Pred 1.0-Version base model.

Thanks to narugo1992 and the deepghs team for open-sourcing various training sets, image processing tools, and models.

Thanks to kohya-ss for the sd-scripts trainer.

License

No modifications are made to the base model's NoobAI license, which is as follows:


This model's license inherits from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 fair-ai-public-license-1.0-sd and adds the following terms. Any use of this model and its variants is bound by this license.

I. Usage Restrictions

  • Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.

  • Prohibited generation of unethical or offensive content.

  • Prohibited violation of laws and regulations in the user's jurisdiction.

II. Commercial Prohibition

We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.

III. Open Source Community

To foster a thriving open-source community, users MUST comply with the following requirements:

  • Open source derivative models, merged models, LoRAs, and products based on the above models.

  • Share work details such as synthesis formulas, prompts, and workflows.

  • Follow the fair-ai-public-license to ensure derivative works remain open source.

IV. Disclaimer

Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.



Info

Base model: NoobAI

About this version: v1.1 [noobai-v-pred-1]

Trained on NoobAI-XL (NAI-XL) V-Pred 1.0-Version

Dataset cutoff: 2025/05/25
