Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
Live demo available on HuggingFace (CPU is slow but free).
Available on the following websites with GPU acceleration:
MY MODELS WILL ALWAYS BE FREE.
NOTES
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different. Stay tuned for V5!
Results of version 3.32 "clip fix" will vary from the examples (which were produced on 3.31, the version I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the CLIP error doesn't spread.
Inpainting models are only for inpainting and outpainting, not for txt2img or mixing.
After a lot of tests I'm finally releasing my mix. This started as a model meant to produce good portraits that don't look like CG or heavily filtered photos, but like actual paintings. The result is a model capable of doing the portraits I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks for anime-style images.
I hope you'll enjoy it as much as I do.
Diffusers weights (courtesy of /u/Different-Bet-1686) are in the official HF repository: https://huggingface.co/Lykon/DreamShaper
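As a rough sketch, the Diffusers weights above can be loaded with the 🤗 diffusers library. Only the repo id and the CLIP skip 2 suggestion come from this page; the prompt, step count, and guidance scale below are illustrative assumptions:

```python
# Sketch of loading DreamShaper via the diffusers library.
# MODEL_ID comes from the repo linked above; everything else
# (steps, guidance scale, negative prompt) is an assumption.
MODEL_ID = "Lykon/DreamShaper"
GEN_KWARGS = {
    "num_inference_steps": 25,  # assumed; tune to taste
    "guidance_scale": 7.0,      # assumed; tune to taste
    "clip_skip": 2,             # the suggested CLIP skip 2 setting
}

def generate(prompt, negative_prompt="photo, cg, heavy filters", device="cuda"):
    # Heavy dependencies are imported lazily so the settings above
    # can be inspected without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to(device)
    return pipe(prompt, negative_prompt=negative_prompt, **GEN_KWARGS).images[0]
```

Calling `generate(...)` downloads the weights on first use; CPU also works (pass `device="cpu"` and drop the fp16 dtype) but, as with the HuggingFace demo, it is slow.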
Suggested settings:
- I used CLIP skip 2 on some pics (all of them for version 4)
- I used ENSD: 31337 for basically all of them
- All of them used highres. fix or img2img at a higher resolution
- I don't use restore faces, as it washes out the painting effect
- Version 4 requires no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight:
-- https://civitai.com/models/4219 (for the girls with glasses, or if the prompt says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if the prompt says mksk style)
-- https://civitai.com/models/4982/anime-screencap-style-lora (not used for any example but works great)
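For the version 3 + LoRA workflow above, here is a minimal diffusers-based sketch. The local LoRA filename is a placeholder assumption; `load_lora_weights` and the `cross_attention_kwargs` scale are the library's standard LoRA mechanism, not something specific to this model:

```python
# Sketch of applying one of the LoRA networks above at the suggested
# 0.35 weight. `pipe` is assumed to be a StableDiffusionPipeline with
# DreamShaper v3 loaded; the filename below is a placeholder.
LORA_SCALE = 0.35  # the 0.35 weight suggested above

def generate_anime(pipe, prompt):
    # Load the downloaded LoRA file from the current directory.
    pipe.load_lora_weights(".", weight_name="anime_style_lora.safetensors")
    # The scale is passed per call, so the same pipeline can also be
    # used without the LoRA at full strength.
    return pipe(prompt, cross_attention_kwargs={"scale": LORA_SCALE}).images[0]
```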
NOTE: if the prompts below look "familiar", it's because I've taken them from other reviews and models here to compare my model against other examples. Credits to the original authors, and thanks for the benchmark.
DreamShaper is an image-generation model, distributed as a Safetensors checkpoint, created by AI community user Lykon. Derived from the powerful Stable Diffusion (SD 1.5) model, DreamShaper has undergone an extensive fine-tuning process on a dataset of images generated by other AI models and user-contributed data. This fine-tuning makes DreamShaper well suited to the specific use cases it was designed for, such as anime, landscapes, and characters.
With a rating of 4.97 and over 458 ratings, DreamShaper is a popular choice among users for generating high-quality images from text prompts.
You can download the latest version of DreamShaper from here.
To use DreamShaper, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then give the model a detailed text prompt, and experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / checkpoint models like DreamShaper, check out our crash course in AI image generation.
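For AUTOMATIC1111 specifically, checkpoints go in the web UI's `models/Stable-diffusion` folder. That directory is the repo's documented default; the checkpoint filename below is a placeholder standing in for the real download:

```shell
# Sketch of where AUTOMATIC1111's web UI looks for checkpoints.
# A placeholder file stands in for the downloaded DreamShaper checkpoint.
mkdir -p stable-diffusion-webui/models/Stable-diffusion
touch DreamShaper_placeholder.safetensors
mv DreamShaper_placeholder.safetensors stable-diffusion-webui/models/Stable-diffusion/
# After launching the UI (webui.sh / webui-user.bat), pick the
# checkpoint from the dropdown at the top left.
ls stable-diffusion-webui/models/Stable-diffusion/
```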
Diffusers format of DS8