Samael1976
over 1 year ago

⬇ Read the info below to get high-quality images ⬇

>>> UPLOADING/SHARING MY MODELS OUTSIDE CIVITAI IS STRICTLY PROHIBITED* <<<

The only authorized generative service websites are:

Mage.Space = This one helps me a lot toward buying a new PC

RenderNet.ai Sinkin.ai Graydient Arcanium.art Gallerai.ai RunDiffusion

This model is free for personal use and free for personal merging(*).

For commercial use, please be sure to contact me (Ko-fi) or by email: samuele[dot]bonzio[at]gmail[dot]com


Leaderboard of my 3 best supporters:

https://ko-fi.com/samael1976/leaderboard


Aniverse is just the beginning!

This is a long-term project, and I'd like to implement something new with every update!

The name is a merge of two words, Animation and Universe (plus a pun: Any+Universe -> Anyverse -> Aniverse)


-> If you are satisfied with my model, press ❤️ to follow its progress, and consider leaving ⭐⭐⭐⭐⭐ in a model review; it's really important to me!

Thank you in advance 🙇

And remember to publish your creations using this model! I’d really love to see what your imagination can do!


Recommended Settings:

  • An excessive negative prompt can make your creations worse, so follow my suggestions below!

  • Before applying a LoRA to produce your favorite character, try generating without it first. You might be surprised what this model can do!


My A1111 settings:

On my home PC I run A1111 with this setting:

  • set COMMANDLINE_ARGS= --xformers

If you can't install xFormers (read below), use my Google Colab setting:

  • set COMMANDLINE_ARGS= --disable-model-loading-ram-optimization --opt-sdp-no-mem-attention

My A1111 Version: v1.6.0-RC-28-ga0af2852  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2

If you want to activate the xFormers optimization as on my home PC (how to install xFormers):

  • In A1111, click on the "Settings" tab

  • In the left column, click on "Optimization"

  • Under "Cross attention optimization", select "xformers"

  • Press "Apply Settings"

  • Restart Stable Diffusion

If you can't install xFormers, use SDP attention, as on my Google Colab:

  • In A1111, click on the "Settings" tab

  • In the left column, click on "Optimization"

  • Under "Cross attention optimization", select "sdp-no-mem - scaled dot product without memory efficient attention"

  • Press "Apply Settings"

  • Restart Stable Diffusion

To emulate the NVIDIA GPU random-number source, follow these steps:

  • In A1111, click on the "Settings" tab

  • In the left column, click on "Show all pages"

  • Search for "Random number generator source"

  • Select the option "NV"

  • Press "Apply Settings"

  • Restart Stable Diffusion

If you use my models, install the ADetailer extension for your A1111:

  • Navigate to the "Extensions" tab within Stable Diffusion

  • Go to the "Install from URL" subsection

  • Paste the following URL: https://github.com/Bing-su/adetailer

  • Click on the "Install" button to install the extension

  • Restart Stable Diffusion



  • Sampling method: DPM++ 2M SDE Karras

  • Width: 576 (or 768)

  • Height: 1024

  • CFG Scale: 3 -> Steps: 15
    CFG Scale: 4 -> Steps: 20
    CFG Scale: 5 -> Steps: 25
    CFG Scale: 6 -> Steps: 30

    ...and so on...
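The CFG/steps pairs above follow a simple pattern: about five sampling steps per CFG point. Here's a tiny helper sketching that rule (the function name, and the assumption that the linear pattern continues past CFG 6, are illustrative, not part of the official settings):

```python
def steps_for_cfg(cfg_scale: int) -> int:
    """Suggested sampling steps for a given CFG scale, following the
    table's pattern of 5 steps per CFG point (an extrapolation)."""
    return cfg_scale * 5
```

So, continuing the "...and so on...", CFG 7 would suggest 35 steps.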


MY FAVORITE PROMPT:

  • (masterpiece, best quality, highres:1.2), (photorealistic:1.2), (intricate and beautiful:1.2), (detailed light:1.2), (colorful, dynamic angle), RAW photo, upper body shot, fashion photography, YOUR PROMPT, (highres textures), dynamic pose, bokeh, soft light passing through hair, (abstract background:1.3), (sharp), exposure blend, bokeh, (hdr:1.4), high contrast, (cinematic), (muted colors, dim colors, soothing tones:1.3), morbid


    NEGATIVE PROMPT:

  • (worst quality, low quality), negative_hand-neg, bad-hands-5, naked, nude, braless, cross, sepia, black&white, B&W, painting, drawing, illustration
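The parentheses in the prompts above use A1111's attention syntax: `(text:1.2)` multiplies the emphasis of everything inside by 1.2. A minimal sketch that pulls the explicit weights out of such a prompt (the regex and function are illustrative; A1111 itself also handles bare `(text)` groups, which imply 1.1, and nested parentheses, which this sketch ignores):

```python
import re

# Matches A1111-style "(text:weight)" groups; bare "(text)" and
# nested parentheses are deliberately not handled in this sketch.
WEIGHTED = re.compile(r"\(([^():]+):([\d.]+)\)")

def explicit_weights(prompt: str) -> dict:
    """Return {text: weight} for every explicitly weighted group."""
    return {text.strip(): float(w) for text, w in WEIGHTED.findall(prompt)}
```

For example, `explicit_weights("(hdr:1.4), bokeh")` returns `{"hdr": 1.4}`.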


YOU CAN ALSO USE THESE NEGATIVE EMBEDDINGS:


HiRes.fix Settings:

I don't use HiRes.fix because:

1) it doesn't work on my computer

2) my models don't need it. Use txt2img, ADetailer, and the suggested upscaler in the resources tab.

If you still want to use it, these are the settings sent to me by MarkWar (follow him to see his creations ❤️):

Hires upscale: 1.5

Hires steps: 20~30

Hires upscaler: R-ESRGAN 4x+ Anime6B

Denoising strength: 0.4

Adetailer: face_yolov8n

How to install and use ADetailer: see the steps above.


Inpainting Setting:

When you see that I used inpainting on my images, it's only to modify the face (HiRes.fix doesn't work on my old PC and gets stuck). These are my settings:

  • Click on the img2img tab, then click on Inpaint ->

  • Paint the face (only the face, neck, ears...) and then set:

  • Inpaint masked

  • Only masked

  • Only masked padding, pixels: 12

  • Sampling steps: 50

  • Batch size: 8
    In the positive prompt, write: (ultra realistic, best quality, masterpiece, perfect face)

  • Then click GENERATE


ControlNet & Prompt guide video tutorial:

Thanks to: tejasbale01 - Spidey Ai Art Tutorial (follow him on YouTube)

Animesh Full V1.5 + Controlnet | Prompt Guide |


Do you like my work?

If you want, you can help me buy a new PC for Stable Diffusion!
❤️ You can buy me a coffee (an espresso... I'm Italian) or a beer ❤️

This is the hardware list, if you're curious: Amazon Wishlist


I must thank Olivio Sarikas and SECourses for their video tutorials! (I'd really love to see a video of yours using my model ❤️)


You are solely responsible for any legal liability resulting from unethical use of this model.

  • (*) MarkWar is authorized by me to do anything with my models.

  • (**) Why did I set such stringent rules? Because I'm tired of seeing sites like Pixai (and many others) that get rich on the backs of the model creators without giving anything in return.

  • (***) Low Rank Adaptation models (LoRAs) and Checkpoints created by me.

    As per Creative ML OpenRAIL-M license section III, derivative content (i.e. LoRAs, checkpoints, mixes, and other derivatives) may be distributed under a modified license. In that case, the license is specified on each individual model page on Civitai.com. For all models produced by me, hosting, reposting, re-uploading, or otherwise using them on other sites that provide generation services without my explicit authorization is prohibited.

  • (****)According to Italian law (I'm Italian):

    The law on copyright (law 22 April 1941, n. 633, and subsequent amendments, most recently that provided for by the legislative decree of 16 October 2017 n.148) provides for the protection of "intellectual works of a creative nature", which belong to literature, music, figurative arts, architecture, theater and cinema, whatever their mode or form of expression.

    Subsequent changes, linked to the evolution of new information technologies, have extended the scope of protection to photographic works, computer programs, databases and industrial design creations.

    Copyright is acquired automatically when a work is defined as an intellectual creation.

    Also valid for the US: https://ufficiobrevetti.it/copyright/copyright-usa/

    All my Stable Diffusion models in Civitai (as per my approval) are covered by copyright.


What is AniVerse?

AniVerse is a highly specialized image-generation AI model (a Safetensors checkpoint) created by AI community user Samael1976. Derived from the powerful Stable Diffusion (SD 1.5) model, AniVerse has undergone an extensive fine-tuning process on a dataset of images generated by other AI models or contributed by users. This fine-tuning ensures that AniVerse can generate images highly relevant to the specific use cases it was designed for, such as anime, landscapes, and manga.

With a rating of 4.91 and over 1452 ratings, AniVerse is a popular choice among users for generating high-quality images from text prompts.

Can I download AniVerse?

Yes! You can download the latest version of AniVerse from here.

How to use AniVerse?

To use AniVerse, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors checkpoint models like AniVerse, check out our crash course in AI image generation.

Download size: 1.94 GB


Info

Base model: SD 1.5

Version V2.0 (HD) - Pruned: 2 Files


About this version: V2.0 (HD) - Pruned

PRUNED VERSION

This AniVerse version was literally the hardest model I've made, but I also think it's the best version of all the ones released.

I have a lot of things to write, but I'll start by giving you the details for your creations, right after a brief aside.

Why the "HD" and "2.0" labels? I imagine you're wondering.

HD: because this version of AniVerse is trained (almost) entirely at 1024x1024 px.

2.0: because there is a slight shift toward greater realism compared to previous versions.

////////////////////////////////////////////////////////////////////////////////////////////////////

This version gives you more room for different uses.

You can use it as usual to create 2.5D images in a semi-realistic, cartoonish, or more realistic style.

It all depends on the positive and negative prompts you use.

You can find examples of the prompts I used.

In summary I can tell you that:

1) Bad-Images-39000 = More cartoonish images

2) Easy Negative and FastNegativeV2 = Usual Aniverse 2.5D style images

3) ng_deepnegative_v1_75t and EasyNegativeV2 = Images like the classic Aniverse, with a pinch of greater realism (not much)

////////////////////////////////////////////////////////////////////////////////////////////////////

But what does training at 1024 px mean?

You can push your creations, with some peace of mind, to 1024x576 (or 1024x768) without using HiRes.fix. I went even further, up to 1280x720, with an error rate of roughly ~5% (if you follow my advice):

  • Sampling method: DPM++ 2M SDE Karras

  • Width: 576 (or 768)

  • Height: 1024

  • CFG Scale: 3 -> Steps: 15
    CFG Scale: 4 -> Steps: 20
    CFG Scale: 5 -> Steps: 25
    CFG Scale: 6 -> Steps: 30

    ...and so on...

MY FAVORITE PROMPT:

  • (masterpiece, best quality, highres:1.2), (photorealistic:1.2), (intricate and beautiful:1.2), (detailed light:1.2), (colorful, dynamic angle), RAW photo, upper body shot, fashion photography, YOUR PROMPT, (highres textures), dynamic pose, bokeh, soft light passing through hair, (abstract background:1.3), (sharp), exposure blend, bokeh, (hdr:1.4), high contrast, (cinematic), (muted colors, dim colors, soothing tones:1.3), morbid

    NEGATIVE PROMPT:

  • (worst quality, low quality), negative_hand-neg, bad-hands-5, naked, nude, braless, cross, sepia, black&white, B&W, painting, drawing, illustration

YOU CAN ALSO USE THESE NEGATIVE EMBEDDINGS:

I created over 1000 images to be sure, and when I talk about an error rate of around 5% I naturally mean the classic elongated or double bodies caused by too high a resolution.

Sometimes three arms or three legs may appear, but that is a different type of error, unrelated to resolution, so I excluded it from the error calculation.

If you want to avoid this kind of error, just drop to a resolution of 1024x576 px to get back into a sort of safe zone.
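Putting the resolution advice together, here's a small helper that classifies a target resolution against these zones (the function name and the exact cutoffs-as-code are my own sketch of the guidance, not something shipped with the model):

```python
def resolution_zone(width: int, height: int) -> str:
    """Classify a resolution against the guidance above, in either
    orientation: 'safe' (within 1024x768, no HiRes.fix needed),
    'pushed' (up to 1280x720, ~5% elongated/double-body errors),
    or 'untested' (beyond what was tried)."""
    short_side, long_side = sorted((width, height))
    if short_side <= 768 and long_side <= 1024:
        return "safe"
    if short_side <= 720 and long_side <= 1280:
        return "pushed"
    return "untested"
```

For example, 1024x576 lands in the safe zone in either orientation, while 1280x720 is in the pushed zone.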

The hands!!! I finally managed to improve the hands!!!

I said improve: they are not yet perfect, but I am very happy with the level of detail achieved in the creations.

There are other things I would like to improve in the next version (in fact, I'm already working to eliminate the occasional cuts/artifacts that sometimes appear on the face).

I would also like to improve the angle of the shadows and other small things (like the teeth).

But I can say I am absolutely satisfied with this version.

Now a little chitchat to tell you how hard-won this version was.

The PC I'm training on, every so often (I'm still trying to pin down the cause), decides to crash, so I have to power it off and restart the training (and sometimes I'm away from home and lose entire days).

The training lasted approximately 490 hours, equivalent to about 21 continuous days of training.

To find the fix for the hands, after I had the final model, I worked on it for another 2 weeks.

I repeat: it was hard, long, and difficult, and more than a few times I thought about throwing everything away. But in the end, thanks also to a bit of luck, I managed to create this version of AniVerse, and it is truly satisfying me.

I hope it gives you the same satisfaction it is giving me.

We'll talk again soon... with a new project (which has already been in progress for a while, but at this time of writing, I haven't the faintest idea if it will lead to the desired results).

Samuele

