This is a simple 50/50 merge between two of my favorite photorealistic models - Gonzales-NSFW-PonyV1/V2-DMD v2.0 and LomoXL. Many, many thanks to the makers of those excellent models.
I thought I would spend hours trying to find the right ratio, but I was so happy with the plain 50/50 mix that I couldn't wait to upload it. It has a VAE baked in, as well as the DMD LoRA, so generations are super quick.
Sampler: LCM
Scheduler: Karras, Exponential or Beta
Steps: 8-12
CFG: 1.0-1.3
Clip Skip: 1 for a more analog look and 2+ to remove the analog effect
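For anyone scripting generations outside a UI, the settings above might be sketched with Hugging Face diffusers roughly as follows. This is only an illustration, not part of the model release: the checkpoint filename and prompt are placeholders, and exact loader APIs vary across diffusers versions.

```python
# Sketch of GonzaLomo DMD's recommended fast-generation settings with
# diffusers. Checkpoint filename and prompt are placeholders.
SETTINGS = {
    "num_inference_steps": 10,  # model card recommends 8-12
    "guidance_scale": 1.2,      # CFG 1.0-1.3
    "clip_skip": 1,             # 1 = more analog look, 2+ = less analog
}

def generate(prompt, path="GonzaLomoDMD.safetensors"):
    # Imported lazily so SETTINGS can be reused without diffusers installed.
    from diffusers import StableDiffusionXLPipeline, LCMScheduler

    pipe = StableDiffusionXLPipeline.from_single_file(path)
    # DMD-distilled checkpoints pair with the LCM sampler.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.to("cuda")
    return pipe(prompt, **SETTINGS).images[0]

# Example (needs the checkpoint file and a GPU):
# generate("35mm film photo of a cafe at dusk").save("out.png")
```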
Try this workflow to get started, if you need it.
Gonzales is a Pony model, so GonzaLomo inherits all of those capabilities. LomoXL has a very strong analog feel to it, which helps give an edge to Gonzales' "poniness".
Enjoy!
P.S. This is my first model upload so I would appreciate any constructive feedback/comments.
GonzaLomo DMD is a specialized image-generation model, distributed as a Safetensors checkpoint, created by AI community user GBRX. Derived from the Stable Diffusion XL (SDXL 1.0) base model, GonzaLomo DMD has been fine-tuned on a dataset of AI-generated and user-contributed images. This fine-tuning makes it well suited to the use cases it was designed for, such as character art and photorealistic imagery.
To use GonzaLomo DMD, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint models like GonzaLomo DMD, check out our crash course in AI image generation.
** This model may only work in ComfyUI as it requires the Perturbed Attention Guidance node.
This is an experimental merge between my GonzaLomo v4.0 non-DMD model and BigASP v2.5. I wouldn't say it's better than either of those two models. This is just an attempt to try to incorporate some of the great attributes of BigASP into my models.
BigASP provides superior color, composition and lighting - it's really like an SDXL version of Flux - but it's also not quite compatible with other SDXL models. That's why this is purely experimental.
This is not a DMD model. Here are the settings that seem to work best for me:
Steps: 20-30
CFG: 1.5-2.5
Sampler: Euler (Euler A also works well if you turn PAG down to 1.0)
Scheduler: Exponential
Perturbed Attention Guidance: 3.0
Model Sampling SD3: Does not work - unless you prefer your images to look like gray mushy oatmeal
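As a rough equivalent outside ComfyUI, recent diffusers releases ship Perturbed Attention Guidance pipelines, so the settings above might be sketched like this. Again, this is an illustration only: the checkpoint filename and prompt are placeholders, loader APIs vary across diffusers versions, and ComfyUI's "Exponential" scheduler has no exact one-line counterpart here.

```python
# Sketch of the non-DMD merge's settings using diffusers' PAG pipeline as a
# stand-in for ComfyUI's Perturbed Attention Guidance node.
SETTINGS = {
    "num_inference_steps": 25,  # model card recommends 20-30
    "guidance_scale": 2.0,      # CFG 1.5-2.5
    "pag_scale": 3.0,           # Perturbed Attention Guidance strength
}

def generate(prompt, path="GonzaLomo-BigASP.safetensors"):
    # Imported lazily so SETTINGS can be reused without diffusers installed.
    from diffusers import StableDiffusionXLPAGPipeline, EulerDiscreteScheduler

    pipe = StableDiffusionXLPAGPipeline.from_single_file(path)
    # Euler sampler; default Euler sigmas stand in for ComfyUI's
    # "Exponential" scheduler, which has no direct flag here.
    pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
    pipe.to("cuda")
    return pipe(prompt, **SETTINGS).images[0]

# Example (needs the checkpoint file and a GPU):
# generate("analog photo of a rainy street").save("out.png")
```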
Note that the images in the model showcase have been created using my refiner workflow for BigASP 2.5, so they've been upscaled and refined by my GonzaLomo v4.0.