This is a merge of a couple of Illustrious models, made with the intention of creating realistic character generations without losing most of Illustrious's innate knowledge.
All the example images were made using HiresFix, ADetailer, or Kohya HRFix with only this checkpoint. Feel free to ask any questions about the exact usage.
Character knowledge is pretty decent: you can generate almost any character that base Illustrious can if you specify a few small design details.
Faces and skin look reasonably realistic given that level of character knowledge.
Lighting is pretty good: you can generate actual night scenes or harsh shadows without the whole image looking like it is lit by a spotlight.
Same-ish faces, so the model is not the best for generating generic realistic people. Faces will still change with quality tags, a different specified ethnicity, or different characters.
Faces can look off-putting in some cases (too many blemishes, or too anime-looking), but this can be fixed; read the usage guide.
Expressions are a bit cartoonish.
Hands are not the best in dynamic images.
DPM++ 2M SDE, Karras, 30-50 steps, CFG 5-6.
Most of my images are 720x1600 upscaled to 1080x2400 with HiresFix.
I’m almost always using ADetailer with 0.2-0.4 denoise and HiresFix with 0.3-0.4 denoise and a 1.5-2x upscale, without any dedicated upscaler.
For full body shots you can use Kohya HRFix with 0.35-0.5 end percent and a 1.5 downscale factor. It lets you keep some very fine details that can't really be restored after the initial generation at regular resolution without heavy inpainting.
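The settings above can be sketched as a plain config plus a small helper for the HiresFix target resolution. This is purely illustrative (the dict layout and the helper are my own, not a webui API); dimensions are snapped to multiples of 8, as SDXL-family models expect.

```python
# Recommended settings from this guide, collected in one place (illustrative only).
generation_settings = {
    "sampler": "DPM++ 2M SDE",
    "schedule": "Karras",
    "steps": 40,               # 30-50 recommended
    "cfg_scale": 5.5,          # 5-6 recommended
    "hires_denoise": 0.35,     # HiresFix: 0.3-0.4
    "adetailer_denoise": 0.3,  # ADetailer: 0.2-0.4
}

def hires_size(width: int, height: int, upscale: float = 1.5) -> tuple[int, int]:
    """HiresFix target size: scale the base resolution and snap to multiples of 8."""
    return int(width * upscale) // 8 * 8, int(height * upscale) // 8 * 8

# The 720x1600 base used for most example images, upscaled 1.5x:
print(hires_size(720, 1600))  # (1080, 2400)
```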
Don't spam quality tags from 2D Illustrious checkpoints; you will just get 2.5D anime faces.
You can get by with a simple:
realistic, detailed, detailed face, volumetric lighting
If you want the face to look a bit more pleasing:
cute, beautiful
Other decent tags:
natural skin, realistic skin, beautiful eyes, detailed background
For night scenes use:
(low light, dark, night)
Some characters will look more 2.5D; use this in the NEGATIVE prompt to fix it:
(anime, big eyes, anime eyes:1.2)
If you want smooth skin without it looking like a cheap filter, use this in the NEGATIVE prompt if necessary:
blemishes, moles, freckles
I highly recommend the Forge webui if you have any VRAM problems at higher resolutions: its built-in Tiled VAE can let you generate pictures up to 8k resolution even with 6-8 GB of VRAM, at the cost of generation time. Use Never OOM and/or Kohya HRFix in txt2img.
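The tag recipes above can be combined programmatically. Below is a hypothetical helper (my own illustration, not part of any webui) that assembles A1111-style positive and negative prompt strings from the tag groups listed in this guide:

```python
def build_prompt(*tag_groups: list[str]) -> str:
    """Join tag groups into one comma-separated prompt string."""
    return ", ".join(tag for group in tag_groups for tag in group)

base = ["realistic", "detailed", "detailed face", "volumetric lighting"]
pleasing_face = ["cute", "beautiful"]
night = ["(low light, dark, night)"]  # only for night scenes

positive = build_prompt(base, pleasing_face, night)

# Negative prompt: anti-2.5D weighting plus the optional smooth-skin tags.
negative = build_prompt(
    ["(anime, big eyes, anime eyes:1.2)"],
    ["blemishes", "moles", "freckles"],
)

print(positive)
print(negative)
```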
https://github.com/lllyasviel/stable-diffusion-webui-forge/releases/tag/latest
For multiple people I recommend the Forge Couple extension; it keeps character features from getting mixed up, similar to Regional Prompter. Maintaining good lighting gets a bit trickier the smaller each character's image segment becomes.
https://github.com/Haoming02/sd-forge-couple
And obviously ADetailer
https://github.com/Bing-su/adetailer
Models used in merge:
Quintillus - Reality https://civitai.com/models/1131256
Uncanny valley https://civitai.com/models/507472
LeafVice [ illustrious ] by Leaf https://civitai.com/models/1177564
RealVisXL V5.0 https://civitai.com/models/139562/realvisxl-v50
SpectralMix is a Safetensors checkpoint by diffusor1650, derived from Stable Diffusion (Illustrious) and aimed at photorealistic generation.