
Alternatively, use Clip Skip 1 or 2.
While this model may look fine to some, to others it may fall into the uncanny valley. The fix is to add (realistic:0.1~1.4) to the prompt, or (realistic:0.1~1) to the negative prompt.
Default prompt: best quality, masterpiece
Default negative prompt: (low quality, worst quality:1.4)
Recommended samplers: Euler a or DPM++ SDE Karras. Steps 20, CFG scale 6 (modified; you can use a higher scale and step count. I'm not good at drawing pictures).
Apply a VAE: kl-f8-anime2 or vae-ft-mse-840000-ema-pruned.
clip skip 2
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt
https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae
Apply a VAE and you will get better color results.
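For reference, here is a minimal, hedged diffusers sketch of the settings above, assuming the checkpoint has been downloaded locally (the file name dosmix.safetensors is a placeholder). Note that vanilla diffusers does not parse A1111-style (…:1.4) prompt weighting, and its clip_skip argument counts skipped layers, so clip_skip=1 roughly corresponds to "Clip skip 2" in the web UI.

```python
# Minimal sketch of the recommended settings with diffusers.
# "dosmix.safetensors" is a placeholder for the downloaded checkpoint file.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler
from diffusers.models import AutoencoderKL

# Load the recommended VAE separately (diffusers repack of
# vae-ft-mse-840000-ema-pruned) so it overrides the checkpoint's own VAE.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

pipe = StableDiffusionPipeline.from_single_file(
    "dosmix.safetensors", vae=vae, torch_dtype=torch.float16
).to("cuda")

# DPM++ SDE Karras, one of the two recommended samplers
# (EulerAncestralDiscreteScheduler would give "Euler a").
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="best quality, masterpiece, 1girl, upper body",
    negative_prompt="(low quality, worst quality:1.4)",  # weighting syntax is only interpreted by UIs like A1111
    num_inference_steps=20,  # Steps 20
    guidance_scale=6,        # CFG scale 6
    clip_skip=1,             # roughly "Clip skip 2" in the A1111 UI
).images[0]
image.save("dosmix_sample.png")
```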
Hires fix: denoising strength 0.5, upscale by 2. Upscaler: Latent or R-ESRGAN 4x+ Anime6B.
If you don't upscale with hires fix, you may not get the results you expect.
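Hires fix itself is a web-UI feature, but a rough two-pass approximation can be done with diffusers: generate at the base resolution, resize by 2 (a plain image resize stands in for the Latent / R-ESRGAN upscalers), then refine with img2img at strength 0.5. The sketch below reuses the pipe object from the previous snippet.

```python
# Rough two-pass approximation of hires fix, reusing "pipe" from above.
from diffusers import StableDiffusionImg2ImgPipeline

prompt = "best quality, masterpiece, 1girl, upper body"
negative = "(low quality, worst quality:1.4)"

# First pass at the base resolution (512x512 for SD 1.5).
base = pipe(prompt, negative_prompt=negative,
            num_inference_steps=20, guidance_scale=6).images[0]

# Upscale by 2, then refine at denoising strength 0.5.
upscaled = base.resize((base.width * 2, base.height * 2))

img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
hires = img2img(prompt, negative_prompt=negative, image=upscaled,
                strength=0.5, num_inference_steps=20,
                guidance_scale=6).images[0]
hires.save("dosmix_hires.png")
```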
This model seems to have a better success rate with hands than other models, but that is just my personal opinion; please test it yourself. I don't recommend weighting hand prompts too heavily. Even without a hand prompt, hands come out okay about 3 times out of 10.
The closer the subject is, the more detailed the result. Prompts such as upper body and cowboy shot are also recommended.
I also verified that it runs on Colab. It works very well.
Other models
https://civitai.com/models/6437/anidosmix
https://civitai.com/models/8437/ddosmix
https://civitai.com/models/6925/realdosmix
DosMix is a specialized image-generation AI model distributed as a Safetensors / Checkpoint file, created by AI community user DiaryOfSta. Derived from Stable Diffusion 1.5, DosMix has been fine-tuned on a dataset of images generated by other AI models and user-contributed data. This fine-tuning makes DosMix well suited to the use cases it was designed for, such as anime, character, and 3D imagery.
With a rating of 4.94 and over 106 ratings, DosMix is a popular choice among users for generating high-quality images from text prompts.
To use DosMix, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion; it might help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint models like DosMix, check out our crash course in AI image generation.
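As a small illustration of the setup step, the sketch below downloads a checkpoint and places it where the AUTOMATIC1111 web UI looks for models. MODEL_URL is a placeholder for the download link on the model page, and WEBUI_DIR assumes a default stable-diffusion-webui install location.

```python
# Hedged sketch: fetch a checkpoint and drop it into the web UI's model folder.
from pathlib import Path
import requests

MODEL_URL = "https://example.com/path/to/dosmix.safetensors"  # placeholder URL
WEBUI_DIR = Path("stable-diffusion-webui")                    # adjust to your install

target = WEBUI_DIR / "models" / "Stable-diffusion" / "dosmix.safetensors"
target.parent.mkdir(parents=True, exist_ok=True)

with requests.get(MODEL_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(target, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

print(f"Checkpoint saved to {target}; select it in the web UI's checkpoint dropdown.")
```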
v1