I really liked AOM3A3, but had some issues when I tested it with my prompts, so I ended up creating a big multi-model block-weighted merge until I felt like I had something satisfactory, and then merged the upper 18 layers of that into AOM3A3. It will usually land close to the composition of an AOM3A3 scene given the same prompt and seed, but not necessarily the appearance. For me, my mix does anatomy better; no guarantees the same will be true for you. Ultimately it has similar capabilities to AOM3, but its own look.
I struggled with whether I should upload this or not; Civit is already overrun with mixes. I've tried a lot of them, and most just don't do it for me: they tend to be too generalized, too focused, too airbrushed-looking, and so on. This mix is capable of much more than what I've posted in the examples, but I want people to see what they can make with minimal effort. You don't need a buttload of style tags in your prompt with this model. Most (maybe all) of the images I provided use simple prompts written by ChatGPT, along with the suggested positive and negative tags listed below.
It contains bits of the following:
Abyss Orange Mix 3 A3, Darkfruit, Ventidos, Eunpyon, Anylact, Woop10, Liberty, Protogenx58, and a personal mix I tossed together months ago that I no longer have the recipe to, but which was also a block-weighted mess of multiple models.

Generally, as I'm merging stuff together I think about which model does what well, and try to merge the corresponding layers that contain the wanted details, a tiny bit at a time, until I get something that looks good. I tried to remember to keep the base alpha and M (mid) layer of AOM3A3 the entire time. This is a slow process; there were probably around 50 merges made when putting this together. Was it worth the effort? Probably not, since it seems every time I find myself happy with a mix, something new gets posted.
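For anyone curious what a single block-weighted merge step actually does, here's a rough sketch. This is not my exact recipe, the block weights shown are placeholders, and it only reflects my understanding of how the merge-block-weighted extensions treat things: each UNet block gets blended by its own ratio, while the base alpha and mid block stay on the AOM3A3 side.

```python
# Rough sketch of one block-weighted merge step between two SD 1.5 checkpoints.
# Not my actual recipe; block weights and filenames here are placeholders.
from safetensors.torch import load_file, save_file

model_a = load_file("AOM3A3.safetensors")       # base model, kept wherever alpha = 0
model_b = load_file("other_model.safetensors")  # model being blended in

# One ratio per UNet block (IN00-IN11, M00, OUT00-OUT11), plus a base alpha for
# everything outside those blocks (text encoder, VAE, time embedding, ...).
block_alphas = {f"input_blocks.{i}.": 0.0 for i in range(12)}
block_alphas["middle_block."] = 0.0  # keep AOM3A3's mid block untouched
block_alphas.update({f"output_blocks.{i}.": 0.1 for i in range(12)})  # nudge upper layers
base_alpha = 0.0  # keep AOM3A3's base alpha untouched too

def alpha_for(key: str) -> float:
    """Pick the merge ratio for a tensor based on which UNet block it belongs to."""
    prefix = "model.diffusion_model."
    if key.startswith(prefix):
        sub = key[len(prefix):]
        for block, a in block_alphas.items():
            if sub.startswith(block):
                return a
    return base_alpha

merged = {}
for key, tensor_a in model_a.items():
    tensor_b = model_b.get(key)
    if tensor_b is not None and tensor_b.shape == tensor_a.shape:
        a = alpha_for(key)
        merged[key] = (1.0 - a) * tensor_a + a * tensor_b
    else:
        merged[key] = tensor_a  # keep A's tensor when B has no matching weight

save_file(merged, "merged_step.safetensors")
```

Repeat that kind of step with different partner models and different block ratios and you get the ~50-merge slog described above.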
*I already have a few variants of this tossed together that I'll upload eventually.
Yes, you can make hardcore content with it, though I haven't really tried beyond a quick prompt or two to verify the capability is indeed there.
Prompting information and settings
Just like AOM3, I suggest your prompts start with at least:
positive: hires, best quality
negative: (low quality:1.4),(worst quality:1.4)
(hyper-realistic digital illustration) in the positive cranks up the detail a bit, but if you want to stay close to anime you might not want that.
I like to use:
positive: (hyper-realistic digital illustration), hires, best quality
negative: (low quality:1.4),(worst quality:1.4), (((watermark))), extra fingers, mutated hands, ((poorly drawn hands)), ((extra limbs)), (extra anuses), (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (fused arm), (fused leg), (conjoined twin), (too many fingers)
Not long ago I didn't believe that negatives could actually help hands, but this seems to work quite well.
Sampler: DPM++ 2m Karras @ 24 steps
CFG Scale: 9-11
Hires Upscaler: 4x-UltraSharp
Denoise Strength: 0.47
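If you want to reproduce these settings outside the webui, they map roughly onto diffusers like this. It's only a sketch: the checkpoint filename and sample prompt are placeholders, and it assumes a reasonably recent diffusers build.

```python
# Rough diffusers equivalent of the settings above: DPM++ 2M Karras, 24 steps, CFG ~10.
# Filenames and the sample prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "packets_abyss_fp16.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M with Karras sigmas is the closest match to the webui's "DPM++ 2M Karras".
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Note: plain diffusers ignores the (tag:1.4) weighting syntax; those weights only take
# effect in the webui or with a helper like compel.
positive = "(hyper-realistic digital illustration), hires, best quality, 1girl, rooftop at dusk"
negative = "(low quality:1.4), (worst quality:1.4), (((watermark))), extra fingers, mutated hands"

image = pipe(
    prompt=positive,
    negative_prompt=negative,
    num_inference_steps=24,
    guidance_scale=10.0,  # middle of the 9-11 range
).images[0]
image.save("out.png")
```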
As for licensing - assume you can't use this for any commercial purposes whatsoever, since this may contain bits of models that had licensing stipulations.
FP16 safetensors. I recommend the kl-f8-anime2 VAE, but others seem to work fine. Do not use 'restore faces'.
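If you're loading this in diffusers rather than the webui, attaching the VAE looks roughly like this (filenames are placeholders; in the webui you just select the VAE in settings):

```python
# Sketch: using the kl-f8-anime2 VAE with this checkpoint in diffusers.
# Filenames are placeholders.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "packets_abyss_fp16.safetensors", torch_dtype=torch.float16
)
pipe.vae = AutoencoderKL.from_single_file(
    "kl-f8-anime2.ckpt", torch_dtype=torch.float16
)
pipe.to("cuda")
```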
prompting:
positive: (hyper-realistic digital illustration), hires, best quality
negative: (low quality:1.4),(worst quality:1.4), (((watermark))), extra fingers, mutated hands, ((poorly drawn hands)), ((extra limbs)), (extra anuses), (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (fused arm), (fused leg), (conjoined twin), (too many fingers)
Sampler: DPM++ 2m Karras @ 24 steps
CFG Scale: 9-11
Hires Upscaler: 4x-UltraSharp
Denoise Strength: 0.36 - 0.47
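For reference, the hires pass above is essentially: upscale the first image, then run img2img over it at the listed denoise. Here's a rough sketch of that second pass in diffusers, using a plain Lanczos resize as a stand-in for 4x-UltraSharp (which is an ESRGAN upscaler the webui loads separately); filenames and prompts are placeholders.

```python
# Sketch of the hires second pass: upscale the first image, then img2img at low denoise.
# A plain Lanczos resize stands in for 4x-UltraSharp here.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "packets_abyss_fp16.safetensors", torch_dtype=torch.float16
).to("cuda")

base = Image.open("first_pass.png")
upscaled = base.resize((base.width * 2, base.height * 2), Image.LANCZOS)

image = pipe(
    prompt="(hyper-realistic digital illustration), hires, best quality",
    negative_prompt="(low quality:1.4), (worst quality:1.4)",
    image=upscaled,
    strength=0.4,  # within the 0.36-0.47 denoise range above
    num_inference_steps=24,
    guidance_scale=10.0,
).images[0]
image.save("hires_pass.png")
```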