This is an image generation model based on Illustrious-xl and further trained by Laxhar Lab.
https://civitai.com/models/795765/illustrious-xl
It is trained on the latest full Danbooru and e621 datasets, captioned with native tags.
The version uploaded on 8 October was trained for 5 epochs on 8×H100 GPUs and is released as an Early Access version.
Hugging Face page of the lab:
https://huggingface.co/Laxhar/sdxl_noob
Follow-up models and technical reports will be posted on Hugging Face.
This version improves on the fit of characters and styles compared to Illustrious-xl v0.1, so specific character traits are rendered more faithfully. Laxhar Lab is continuing to train a new open-source SDXL model on top of this beta version, aiming to minimize the need for LoRAs and to release a more noob-friendly, one-click SDXL anime model!
Note: The model name and other details are subject to change.
-We are compelled to release an extremely premature version of this model against our wishes.
-The model is still in active training and far from complete.
-This forced open-source version is released under the same license terms as its base model, Illustrious-XL-v0.1.
This is an early test version intended for internal use. However, we are considering allowing limited external testing.
- Danbooru (Pid: 1~7,600,039):
https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel
- Danbooru (Pid > 7,600,039):
https://huggingface.co/datasets/deepghs/danbooru_newest-webp-4Mpixel
- E621 (data as of 2024-04-07):
https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel
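These repositories are very large; for anyone who wants to inspect them, a minimal download sketch using huggingface_hub is shown below. The allow_patterns filter is only an illustration, and the metadata file types are assumptions about each repo's layout.

from huggingface_hub import snapshot_download  # pip install huggingface_hub

for repo_id in (
    "KBlueLeaf/danbooru2023-webp-4Mpixel",   # Danbooru pid 1~7,600,039
    "deepghs/danbooru_newest-webp-4Mpixel",  # Danbooru pid > 7,600,039
    "NebulaeWis/e621-2024-webp-4Mpixel",     # e621, data as of 2024-04-07
):
    # Fetch only small metadata files first to inspect the repo layout
    # before committing to a multi-terabyte download.
    path = snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=["*.md", "*.json", "*.csv"],
    )
    print(repo_id, "->", path)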
Recommended tag order for prompts:
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>
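For example, a hypothetical prompt following this order (the character, series, artist, and general tags here are placeholders rather than recommendations, and it assumes quality tags such as masterpiece count as special tags):
1girl, hatsune miku, vocaloid, [artist name], masterpiece, best quality, solo, smile, looking at viewer, outdoors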
For quality tags, we estimated image popularity through the following process (a code sketch follows the table below):
1. Normalize scores across the different sources and rating categories.
2. Apply a time-based decay coefficient according to how recent each image is.
3. Rank all images in the dataset by the resulting score and assign quality tags by percentile.
Our ultimate goal is for quality tags to effectively track user preferences in recent years.
Percentile Range      Quality Tag
> 95th                masterpiece
> 85th, <= 95th       best quality
> 60th, <= 85th       good quality
> 30th, <= 60th       normal quality
<= 30th               worst quality
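A minimal sketch of this scoring pipeline, under assumed details (the field names, an exponential decay with a one-year half-life, and the percentile thresholds from the table above); it is not the lab's actual implementation:

from datetime import datetime, timezone

HALF_LIFE_DAYS = 365.0  # assumed decay constant; the real coefficients are not published

def popularity(record, now):
    # 1) Normalize the raw score so different sources/ratings are comparable
    #    ("source_score_scale" is a made-up field standing in for that normalization).
    normalized = record["score"] / record["source_score_scale"]
    # 2) Apply a time-based decay so recent popularity counts for more.
    age_days = (now - record["created_at"]).days
    return normalized * 0.5 ** (age_days / HALF_LIFE_DAYS)

def quality_tag(percentile):
    # 3) Map the image's percentile rank within the whole dataset to a quality tag.
    if percentile > 95: return "masterpiece"
    if percentile > 85: return "best quality"
    if percentile > 60: return "good quality"
    if percentile > 30: return "normal quality"
    return "worst quality"

def assign_quality_tags(records):
    # records: iterable of dicts with "id", "score", "source_score_scale",
    # and a timezone-aware "created_at" datetime.
    now = datetime.now(timezone.utc)
    ranked = sorted(records, key=lambda r: popularity(r, now))
    n = len(ranked)
    return {r["id"]: quality_tag(100.0 * (i + 1) / n) for i, r in enumerate(ranked)}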
In the CCIP test, NoobAI-XL showed an improvement of approximately 2% over its base model. Across more than 3,500 characters, 89.2% achieved a CCIP score above 0.9. Given the current model performance, the dataset used for the CCIP test needs to be expanded further.
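As a rough illustration of how the aggregate figure could be computed from per-character results (the score dictionary and the CCIP-similarity convention are assumptions):

# Hypothetical helper: char_scores maps each character name to its mean CCIP
# similarity (in [0, 1]) between generated samples and reference images.
def ccip_pass_rate(char_scores, threshold=0.9):
    passed = sum(1 for score in char_scores.values() if score > threshold)
    return passed / len(char_scores)

# A pass rate of 0.892 would correspond to the 89.2% reported above.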
Monetization Prohibition:
● You are prohibited from monetizing any closed-source fine-tuned or merged model, i.e., one that prevents the public from accessing the model's source code/weights and their usage.
● As per the license, you must openly publish any derivative models and variants. This model is intended for open-source use, and all derivatives must follow the same principles.
This model is released under the Fair-AI-Public-License-1.0-SD.
Please check this website for more information:
Freedom of Development (freedevproject.org)
(listed in no particular order)
L_A_X https://civitai.com/user/L_A_X
https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69
li_li https://civitai.com/user/li_li
nebulae https://civitai.com/user/kitarz
Chenkin https://civitai.com/user/Chenkin
Narugo1992:
Thanks to narugo1992 and the deepghs team he leads for open-sourcing a range of training sets, image-processing tools, and models.
https://huggingface.co/deepghs
Naifu:
Training scripts
https://github.com/Mikubill/naifu
OnomaAI:
Thanks to OnomaAI for open-sourcing such a powerful base model.
aria1th261 https://civitai.com/user/aria1th261
kblueleaf https://civitai.com/user/kblueleaf
Euge https://civitai.com/user/Euge_
Yidhar https://github.com/Yidhar
ageless 白玲可 Creeper KaerMorh 吟游诗人 SeASnAkE zwh20081 Wenaka~喵 稀里哗啦 幸运二副
昨日の約. 445
Merry Christmas! NOOBAI XL-VPred 1.0 has been released! The V-prediction series has come to a successful close, and what an interesting journey it has been. Who knows, we might have the chance to do this again in the future. With this, Laxhar Lab's weekly update plan comes to a grand finale!
By the way, here are the advantages of this version:
1. Fine-tuned with high-quality datasets: We have optimized the model for anatomical accuracy and compositional coherence through meticulous adjustments with high-quality data.
2. Flexible style combination weights: The model now offers more flexibility in combining different painting styles, with improved robustness when overlaying multiple styles.
3. Enhanced utility of quality words: The effectiveness of quality words has become more pronounced in this version.
4. A blend of features from the standard and S versions: The color style is vibrant yet less prone to overexposure, combining the best of both worlds.
Usage recommendations and future work:
1. Use a dynamic CFG plugin: We recommend using a dynamic CFG plugin with the V-prediction model to prevent oversaturated or overly gray images; refer to the 0.2 configuration for the best results.
2. Choice of sampling methods: Although NOOBAI XL-VPred 1.0 supports most sampling methods, V prediction does not support the Karras family of samplers, so we suggest Euler or DDIM for more stable results (see the sketch after this list).
3. Ongoing updates and support: We will continue to update VPred 1.0, including a ControlNet model and other plugins. The main model will also be updated irregularly when there are significant improvements (we'll see how NAI4's DiT performs and learn from it), so stay tuned!
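For users running the checkpoint with diffusers instead of a WebUI, a minimal sketch along the lines below should apply; the checkpoint filename, sampler settings, and guidance_rescale value are assumptions based on common practice for v-prediction SDXL models in recent diffusers versions, not an official Laxhar Lab recipe.

import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder filename; point this at the downloaded VPred 1.0 checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "noobaiXLVpred10.safetensors", torch_dtype=torch.float16
).to("cuda")

# V prediction needs a non-Karras scheduler (e.g. Euler) with the prediction type
# set explicitly; zero-SNR rescaling is commonly paired with v-prediction checkpoints.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

image = pipe(
    prompt="1girl, hatsune miku, vocaloid, masterpiece, best quality, solo, smile",
    negative_prompt="worst quality, bad quality, lowres",
    num_inference_steps=28,
    guidance_scale=5.0,
    guidance_rescale=0.7,  # rough stand-in for the "dynamic CFG" advice above
).images[0]
image.save("sample.png")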
Lastly, I must share a personal recommendation: the recent game "MiSide" is truly great. It has been a long time since a single-player game moved me this much, and I strongly recommend it. Wishing you all a Merry Christmas. After a year of hard work, it's time to rest. Until we meet again in this world of possibilities ミ(・・)ミ