This is an image generation model based on Illustrious-XL, further trained by Laxhar Lab.
https://civitai.com/models/795765/illustrious-xl
It utilizes the latest full Danbooru and e621 datasets for training, with native tag captions.
The version uploaded on 8 October was trained for 5 epochs on 8×H100 GPUs and is released as an Early Access version.
Laxhar Lab's Hugging Face page:
https://huggingface.co/Laxhar/sdxl_noob
Follow-up models and technical reports will be posted on Hugging Face.
This version improves on the character and style fidelity of Illustrious-XL v0.1, so specific character traits are better represented. Laxhar Lab is continuing to train a new open-source XL model on top of this beta version, aiming to minimize the need for LoRAs and to release a more beginner-friendly, one-click SDXL anime model.
Note: The model name and other details are subject to change.
- We are compelled to release an extremely premature version of this model against our wishes.
- The model is still actively in training and far from completion.
- This forced open-source version will be released under the same license terms as its base model, Illustrious-XL-v0.1.
This is an early test version intended for internal use. However, we are considering allowing limited external testing.
- Danbooru (Pid: 1~7,600,039):
https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel
- Danbooru (Pid > 7,600,039):
https://huggingface.co/datasets/deepghs/danbooru_newest-webp-4Mpixel
- E621 (data as of 2024-04-07):
https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel
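These are regular Hugging Face dataset repositories, so they can be fetched with huggingface_hub. A minimal sketch follows; note the full archives are hundreds of gigabytes, so the pattern filter below (purely illustrative) restricts the download to metadata:

from huggingface_hub import snapshot_download

# Fetch only the repo's metadata files as a first look; drop
# allow_patterns to download the full (very large) image archives.
local_dir = snapshot_download(
    repo_id="KBlueLeaf/danbooru2023-webp-4Mpixel",
    repo_type="dataset",
    allow_patterns=["*.md", "*.json"],
)
print(local_dir)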
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>
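For example, a prompt following this order might look like the line below, where the character, series, and artist tags are placeholders rather than recommendations:

1girl, hatsune miku, vocaloid, <artist tag>, masterpiece, long hair, smile, outdoors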
For quality tags, we evaluated image popularity through the following process:
1. Normalize data from various sources and rating systems.
2. Apply time-based decay coefficients according to upload recency.
3. Rank all images within the dataset based on the processed scores.
Our ultimate goal is to ensure that quality tags effectively track user preferences from recent years. A rough code sketch of this process follows the table below.
Percentile Range     Quality Tag
> 95th               masterpiece
> 85th, <= 95th      best quality
> 60th, <= 85th      good quality
> 30th, <= 60th      normal quality
<= 30th              worst quality
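As a rough illustration of the process above, not the lab's actual implementation (the normalization and decay coefficients are unpublished, so the half-life here is an assumption):

import math
from datetime import datetime, timezone

def quality_tags(scores, upload_dates, half_life_days=365.0):
    """Map each image to a quality tag by time-decayed popularity percentile.

    scores: normalized popularity scores, one per image
    upload_dates: matching timezone-aware datetimes
    half_life_days: assumed exponential decay half-life (illustrative)
    """
    now = datetime.now(timezone.utc)
    decayed = []
    for score, date in zip(scores, upload_dates):
        age_days = (now - date).total_seconds() / 86400.0
        decayed.append(score * math.exp(-math.log(2) * age_days / half_life_days))
    # Rank within the whole dataset, then map percentile to a tag.
    order = sorted(range(len(decayed)), key=lambda i: decayed[i])
    tags = [None] * len(decayed)
    n = len(decayed)
    for rank, i in enumerate(order):
        pct = 100.0 * (rank + 1) / n
        if pct > 95:
            tags[i] = "masterpiece"
        elif pct > 85:
            tags[i] = "best quality"
        elif pct > 60:
            tags[i] = "good quality"
        elif pct > 30:
            tags[i] = "normal quality"
        else:
            tags[i] = "worst quality"
    return tags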
In the CCIP test, NoobAI-XL showed an improvement of approximately 2% over its base model. Across more than 3,500 characters, 89.2% achieved a CCIP score higher than 0.9. Given the current model performance, the dataset used for the existing CCIP test needs to be expanded further.
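As a trivial illustration of how the 89.2% figure aggregates per-character results (the lab's actual CCIP evaluation harness is not published, so char_scores below is a hypothetical mapping):

def share_above_threshold(char_scores, threshold=0.9):
    """Fraction of characters whose CCIP score exceeds the threshold."""
    hits = sum(1 for score in char_scores.values() if score > threshold)
    return hits / len(char_scores)

# Hypothetical usage over 3,500+ characters; for this model the
# reported result of this aggregation is about 0.892.
# share_above_threshold({"character_a": 0.95, "character_b": 0.88})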
Monetization Prohibition:
● You are prohibited from monetizing any closed-source fine-tuned or merged model, i.e. any model that withholds its source code / weights and usage from the public.
● As per the license, you must openly publish any derivative models and variants. This model is intended for open-source use, and all derivatives must follow the same principles.
This model is released under the Fair-AI-Public-License-1.0-SD.
Please check this website for more information:
Freedom of Development (freedevproject.org)
(listed in no particular order)
L_A_X https://civitai.com/user/L_A_X
https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69
li_li https://civitai.com/user/li_li
nebulae https://civitai.com/user/kitarz
Chenkin https://civitai.com/user/Chenkin
Narugo1992:
Thanks to narugo1992 and the deepghs organization he leads for open-sourcing a range of training sets, image processing tools, and models.
https://huggingface.co/deepghs
Naifu:
Training scripts
https://github.com/Mikubill/naifu
OnomaAI:
Thanks to OnomaAI for open-sourcing such a powerful base model.
aria1th261 https://civitai.com/user/aria1th261
kblueleaf https://civitai.com/user/kblueleaf
Euge https://civitai.com/user/Euge_
Yidhar https://github.com/Yidhar
ageless 白玲可 Creeper KaerMorh 吟游诗人 SeASnAkE zwh20081 Wenaka~喵 稀里哗啦 幸运二副
昨日の約. 445
NoobAI XL (V-pred branch)
This model page is the v-prediction branch of NoobAI XL, trained by continuing the Early Access version for 4 additional epochs. It cannot be used in the AUTOMATIC1111 WebUI; please use it via diffusers or reForge.
This test was conducted mainly by @Euge_; thanks for his hard work as a member of Laxhar Lab.
Usage: reForge
1. Install and launch reForge, then switch to the experimental branch:
git checkout dev_upstream_experimental
2. Find “Advanced Model Sampling for Forge” at the bottom of the page.
3. Enable “Enable Advanced Model Sampling”.
4. Select “v_prediction” under “Discrete Sampling Type”.
Usage: Diffusers
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

ckpt_path = "/path/to/model.safetensors"

# Load the checkpoint as an SDXL pipeline.
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

# Switch the scheduler to v-prediction with zero-terminal-SNR rescaling,
# which this branch of the model requires.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler.register_to_config(
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

pipe.enable_xformers_memory_efficient_attention()  # optional; requires xformers
pipe = pipe.to("cuda")

prompt = "best quality, 1boy, solo"
negative_prompt = "bad hands, worst quality, low quality, bad quality, multiple views, 4koma, comic, jpeg artifacts, monochrome, sepia, greyscale, flat color, pale color, muted color, low contrast, bad anatomy, picture frame, english text, signature, watermark, logo, patreon username, web address, artist name"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=7.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save('image.png')
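One note beyond the original instructions: v-prediction models trained with rescale_betas_zero_snr often benefit from classifier-free guidance rescaling at inference (Lin et al., "Common Diffusion Noise Schedules and Sample Steps are Flawed"), which diffusers exposes through the pipeline's guidance_rescale argument. A variation of the call above, where 0.7 is an illustrative value rather than a lab recommendation:

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=7.0,
    guidance_rescale=0.7,  # illustrative value, not an official recommendation
    generator=torch.Generator().manual_seed(42),
).images[0]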