• RenderNet.ai • Sinkin.ai • Graydient • Arcanium.art • Gallerai.ai • RunDiffusion •
For commercial use, please be sure to contact me via Ko-fi or by email: samuele[dot]bonzio[at]gmail[dot]com
https://ko-fi.com/samael1976/leaderboard
This is a long-running project, and I'd like to implement something new with every update!
The name is a merge of two words, Animation and Universe (and a pun: Any + Universe -> Anyverse -> Aniverse).
-> If you are satisfied with my model, press ❤️ to follow its progress and consider leaving ⭐⭐⭐⭐⭐ in a model review; it's really important to me!
Thank you in advance 🙇
And remember to publish your creations using this model! I’d really love to see what your imagination can do!
An excessive negative prompt can make your creations worse, so follow my suggestions below!
Before applying a LoRA to produce your favorite character, try without it first. You might be surprised by what this model can do!
I run A1111 on my home PC with this setting:
set COMMANDLINE_ARGS= --xformers
If you can't install xFormers (read below), use my Google Colab setting:
set COMMANDLINE_ARGS= --disable-model-loading-ram-optimization --opt-sdp-no-mem-attention
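A minimal sketch of where these flags live, assuming a standard Windows install of AUTOMATIC1111 (webui-user.bat is the default launcher file; adjust the path to your own install):

```bat
rem stable-diffusion-webui\webui-user.bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

rem home PC with xFormers:
set COMMANDLINE_ARGS=--xformers
rem or, if xFormers is unavailable (my Google Colab setting):
rem set COMMANDLINE_ARGS=--disable-model-loading-ram-optimization --opt-sdp-no-mem-attention

call webui.bat
```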
My A1111 Version: v1.6.0-RC-28-ga0af2852 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2
If you want to activate the xFormers optimization like on my home PC (how to install xFormers):
In A1111, open the "Settings" tab.
In the left column, click "Optimization".
Under "Cross attention optimization", select "xformers".
Press "Apply Settings".
Restart Stable Diffusion.
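The same choice is stored in A1111's config.json in the web UI folder; a sketch of the relevant entry (key name taken from recent A1111 releases and may vary between versions):

```json
{
  "cross_attention_optimization": "xformers"
}
```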
If you can't install xFormers, use SDP attention, like on my Google Colab:
In A1111, open the "Settings" tab.
In the left column, click "Optimization".
Under "Cross attention optimization", select "sdp-no-mem - scaled dot product without memory efficient attention".
Press "Apply Settings".
Restart Stable Diffusion.
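This setting also ends up in config.json; a sketch of the entry, assuming the key name and stored string used by recent A1111 releases (both may differ slightly between versions):

```json
{
  "cross_attention_optimization": "sdp-no-mem - scaled dot product without memory efficient attention"
}
```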
To emulate the NVIDIA GPU random number generator, follow these steps:
In A1111, open the "Settings" tab.
In the left column, click "Show all pages".
Search for "Random number generator source".
Select the entry "NV".
Press "Apply Settings".
Restart Stable Diffusion.
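This makes the same seed produce NVIDIA-style noise regardless of your hardware. In config.json it corresponds to an entry like the following (key name assumed from recent A1111 releases):

```json
{
  "randn_source": "NV"
}
```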
If you use my models, install the ADetailer extension for your A1111:
Navigate to the "Extensions" tab within Stable Diffusion.
Go to the "Install from URL" subsection.
Paste the following URL: https://github.com/Bing-su/adetailer
Click the "Install" button to install the extension.
Restart Stable Diffusion.
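If you prefer the command line, the same extension can be installed with git; a sketch assuming a default stable-diffusion-webui checkout (adjust the path to your install):

```shell
# run from the root of your stable-diffusion-webui checkout
cd extensions
git clone https://github.com/Bing-su/adetailer
# then restart the web UI so the extension is loaded
```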
VAE: a VAE is included (but I usually still use the 840000-ema-pruned VAE)
Clip skip: 2
Upscaler: 4x-Ultrasharp or 4X NMKD Superscale
Sampling method: DPM++ 2M SDE Karras
Width: 576 (or 768)
Height: 1024
CFG Scale: 3 -> Steps: 15
CFG Scale: 4 -> Steps: 20
CFG Scale: 5 -> Steps: 25
CFG Scale: 6 -> Steps: 30
...and so on...
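The table above follows a simple pattern: steps = 5 × CFG. A tiny helper (my own naming, not part of any tool) makes the rule explicit:

```python
def steps_for_cfg(cfg: float) -> int:
    """Suggested sampling steps for a given CFG scale (pattern: steps = 5 * CFG)."""
    return int(round(cfg * 5))

for cfg in (3, 4, 5, 6):
    # prints: 3 15, 4 20, 5 25, 6 30 (one pair per line)
    print(cfg, steps_for_cfg(cfg))
```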
MY FAVORITE PROMPT:
(masterpiece, best quality, highres:1.2), (photorealistic:1.2), (intricate and beautiful:1.2), (detailed light:1.2), (colorful, dynamic angle), RAW photo, upper body shot, fashion photography, YOUR PROMPT, (highres textures), dynamic pose, bokeh, soft light passing through hair, (abstract background:1.3), (sharp), exposure blend, bokeh, (hdr:1.4), high contrast, (cinematic), (muted colors, dim colors, soothing tones:1.3), morbid
NEGATIVE PROMPT:
(worst quality, low quality), negative_hand-neg, bad-hands-5, naked, nude, braless, cross, sepia, black&white, B&W, painting, drawing, illustration
YOU CAN ALSO USE THESE NEGATIVE EMBEDDINGS:
For MEN images: girl, woman, female, tits, BadImage_v2-39000, negative_hand-neg, bad-hands-5
I don't use Hires fix because:
1) it doesn't work on my computer
2) my models don't need it. Use txt2img, ADetailer and the suggested upscaler from the resources tab.
If you still want to use it, these are the settings sent to me by MarkWar (follow him to see his creations ❤️).
Hires upscale: 1.5
Hires steps: 20~30
Hires upscaler: R-ESRGAN 4x+ Anime6B
Denoising strength: 0.4
Adetailer: face_yolov8n
When you see that I used inpainting on my images, I only modified the face (Hires fix on my old PC doesn't work and gets stuck). These are my settings:
Click on the img2img tab, then click on Inpaint.
Paint over the face (only the face, neck, ears...) and then set:
Mask mode: Inpaint masked
Inpaint area: Only masked
Only masked padding, pixels: 12
Sampling steps: 50
Batch size: 8
In the positive prompt, write: (ultra realistic, best quality, masterpiece, perfect face)
Then click GENERATE
Thanks to tejasbale01 - Spidey AI Art Tutorial (follow him on YouTube)
Animesh Full V1.5 + Controlnet | Prompt Guide |
Do you like my work?
If you want, you can help me buy a new PC for Stable Diffusion!
❤️ You can buy me a coffee (an espresso... I'm Italian) or a beer ❤️
This is the hardware list if you are curious: Amazon Wishlist
I must thank Olivio Sarikas and SECourses for their video tutorials! (I'd really love to see a video of yours using my model ❤️)
(*) MarkWar is authorized by me to do anything with my models.
(**) Why did I set such stringent rules? Because I'm tired of seeing sites like Pixai (and many others) that get rich on the backs of the model creators without giving anything in return.
(***) Low Rank Adaptation models (LoRAs) and Checkpoints created by me.
As per Creative ML OpenRAIL-M license section III, derivative content (i.e. LoRAs, checkpoints, mixes and other derivative content) is free to change its license for further distribution. In that case, the license is specified on each individual model on Civitai.com. For all models produced by me, hosting, reposting, reuploading or otherwise using my models on other sites that provide a generation service is prohibited without my explicit authorization.
(****) According to Italian law (I'm Italian):
The law on copyright (law 22 April 1941, n. 633, and subsequent amendments, most recently that provided for by the legislative decree of 16 October 2017 n.148) provides for the protection of "intellectual works of a creative nature", which belong to literature, music, figurative arts, architecture, theater and cinema, whatever their mode or form of expression.
Subsequent changes, linked to the evolution of new information technologies, have extended the scope of protection to photographic works, computer programs, databases and industrial design creations.
Copyright is acquired automatically when a work is defined as an intellectual creation.
Also valid for the US: https://ufficiobrevetti.it/copyright/copyright-usa/
All my Stable Diffusion models in Civitai (as per my approval) are covered by copyright.
AniVerse is a highly specialized image-generation AI model of type Safetensors / Checkpoint created by AI community user Samael1976. Derived from the powerful Stable Diffusion (SD 1.5) model, AniVerse has undergone an extensive fine-tuning process, leveraging the power of a dataset consisting of images generated by other AI models or user-contributed data. This fine-tuning process ensures that AniVerse is capable of generating images that are highly relevant to the specific use cases it was designed for, such as anime, landscapes, and manga.
With a rating of 4.91 across over 1,452 ratings, AniVerse is a popular choice among users for generating high-quality images from text prompts.
Yes! You can download the latest version of AniVerse from here.
To use AniVerse, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint AI models like AniVerse, check out our crash course in AI image generation.
Hi everyone, as you may have noticed it's a busy time in my little world.
I'm learning to train with XL (I don't know if you've seen that AniVerse XL came out), AniVerse Pony XL, and now I want to learn how to create a model for Pixart Sigma, as long as my video card allows it.
For this reason my weekly Monday publications with a new model will probably be skipped.
This version of AniVerse is therefore just a merge, designed not to leave all those who use Stable Diffusion 1.5 "alone" (including me).
I particularly like the colors, details and dynamism. Too bad about the hands, which in my tests are not exactly the best.
Let me know what you think.
PS: As soon as I have time, I will write an article for those who have 3-4 GB video cards on how to use XL with them.
Go ahead and upload yours!