Jemnite
12 months ago

itCameToMeInADream_Mid is a model merge based on Midkemia, which is actually pretty cool. Okay, that's the basics. Rapid fire Q&A time:

  • Q: Why?
    A: neclordx character loras

  • Q: What's with the '(m-da s-tarou:0)' stuff?
    A: I dunno. I saw phoenician use it on the Midkemia examples page and the results looked good, so I figured I'd try it too. I have no idea if it does anything; it might be total placebo. The images look okay to me.

  • Q: Schizo CFG?
    A: My life has not been the same ever since I started using perturbed attention guidance...

    So, as it turns out, CFG is pretty important once you start using loras: it amplifies the lora's effect quite significantly. And since we have CFG++ samplers now, we can experiment with various CFG strengths, because those samplers are semi-resistant to CFG artifacting (semi, as in they will still fuck up if you really push the CFG). So now we can also slap the model around with CFG if it isn't cooperating.
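
    If you want to poke at this yourself, here's a rough sketch of the experiment using diffusers; diffusers has no CFG++ samplers, so a stock sampler stands in, and the checkpoint/lora filenames are placeholders:

        import torch
        from diffusers import StableDiffusionXLPipeline

        # Pony models are SDXL-based, so the SDXL pipeline applies.
        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("character_lora.safetensors")  # placeholder lora

        # Same seed, same prompt, only the CFG changes: watch the lora's
        # influence get stronger as guidance_scale climbs.
        for cfg in (2, 4, 7, 10):
            image = pipe(
                "score_9, 1girl",
                guidance_scale=cfg,
                num_inference_steps=18,
                generator=torch.Generator("cuda").manual_seed(42),
            ).images[0]
            image.save(f"cfg_{cfg}.png")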

  • Q: Some of the images aren't replicating.
    A: I sometimes override the CFG scale for the upscale pass, so the mismatch may be the hires CFG. If the CFG is set to 7 and you're getting garbage, try pulling it down to 4 or 2. If that doesn't work, the problem might be on your side.

    Also, obviously, you will need all the same extensions/nodes/etc. too.
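
    For the curious, here's the same "separate CFG for the upscale pass" idea sketched in diffusers rather than the webui; the resolution, strength, and CFG values are illustrative, not the exact settings from the example images:

        import torch
        from diffusers import StableDiffusionXLPipeline, AutoPipelineForImage2Image

        base = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        image = base("score_9, 1girl", guidance_scale=7.0,
                     num_inference_steps=18).images[0]

        # The hires pass reuses the same weights but overrides the CFG downward.
        hires = AutoPipelineForImage2Image.from_pipe(base)
        image = hires(
            "score_9, 1girl",
            image=image.resize((1536, 1536)),  # crude upscale before re-denoising
            strength=0.5,                      # how much the pass re-noises
            guidance_scale=4.0,                # the hires CFG override
            num_inference_steps=18,
        ).images[0]
        image.save("hires.png")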

  • Q: Schizo steps?
    A: Okay, so because basically all diffusion models are highly trained denoisers, each 'step', or denoising pass, chips away at more of the noise. Sometimes we don't want the noise chipped away too much, because we want to do something fucky with the image. In the example images, we combine high CFG with the lowlight lora and then cut the step count off early, so the rest of the picture never materializes and it's just dark as hell. This can create a hella ton of artifacts, but ideally a bunch of upscaling passes will clear them up. Maybe. Hopefully.

    Probably not.
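
    In diffusers terms, one way to approximate "cut the step count off early" is the SDXL pipeline's denoising_end argument, which stops the denoise partway through. A rough sketch, with a placeholder lowlight lora filename:

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("lowlight.safetensors")  # placeholder filename

        image = pipe(
            "score_9, 1girl, night, dark",
            guidance_scale=10.0,    # high CFG, as described above
            num_inference_steps=18,
            denoising_end=0.7,      # bail out before the image fully resolves
        ).images[0]
        image.save("early_stop.png")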

  • Q: Schedulers?
    A: Just use Align Your Steps for everything. If Nvidia says it's good enough for them, it's good enough for you. Praise Jensen.

    (Exponential occasionally seems better for Euler sampling methods, but it was not consistent in my experience. If you want to experiment and share your results, be my guest.)
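
    If you're on diffusers rather than a webui, AYS is exposed as a precomputed timestep schedule you pass into the call. To the best of my knowledge the import looks like this (check your diffusers version; this mirrors the official example):

        import torch
        from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
        from diffusers.schedulers import AysSchedules

        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        pipe.scheduler = DPMSolverMultistepScheduler.from_config(
            pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
        )

        # Nvidia's published 10-step AYS schedule for SDXL-class models.
        ays = AysSchedules["StableDiffusionXLTimesteps"]
        image = pipe("score_9, 1girl", timesteps=ays).images[0]
        image.save("ays.png")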

  • Q: What's with the style variance with the same prompt?
    A: Hires samplers. Euler Neg Dy is really weird about making things all smooth. It's pretty good for fixing hand errors if you don't mind it smoothing out everything. You can get it here. CFG++ tends to push the natural qualities of the model even further. Etc, etc. The checkpoint is the same for both, but the sampler really matters. My rule of thumb for this model: go Euler Neg Dy if you want to fix details and get a smoothish look, CFG++ if you want it rougher and more sketchlike.
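
    Euler Neg Dy and the CFG++ samplers are webui-extension samplers with no stock diffusers equivalent, but the general mechanism (same checkpoint, different sampler per pass) looks like this, with built-in schedulers standing in:

        import torch
        from diffusers import (StableDiffusionXLPipeline, EulerDiscreteScheduler,
                               DPMSolverSDEScheduler)

        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")

        # First pass with one sampler...
        pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
        first = pipe("score_9, 1girl", num_inference_steps=18).images[0]

        # ...then swap samplers for the next pass; the weights never change.
        # (DPMSolverSDEScheduler needs the torchsde package installed.)
        pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
        second = pipe("score_9, 1girl", num_inference_steps=18).images[0]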

  • Q: Quality tags?
    A: Most of the time I just stick to score_9. If you're doing something with backgrounds, or you want a more 2.5d-ish look, run the full pony quality tag string. I've never seen anyone use the rating tags on Midkemia and have no idea if they do anything. I'm not a believer in quality tags in the negatives anymore, because they seem to be based more on personal preference than on any objective quality score. source_pony in the negatives seems to help make the image more coherent. I don't like excluding any other sources; source_anime seems to make every gen look samey. But do whatever you feel like.
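
    For anyone who doesn't know it, the "full pony quality tag string" usually means the stacked score tags from Pony Diffusion V6; a trivial sketch of prepending it:

        PONY_QUALITY = ("score_9, score_8_up, score_7_up, "
                        "score_6_up, score_5_up, score_4_up")
        prompt = f"{PONY_QUALITY}, 1girl, detailed background"
        negative_prompt = "source_pony"  # per the advice above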

  • Q: Any other advice?
    A: 18 seems to be the magic step count for this model. Well, 17, but I am racist against Prime Numbers so fuck 17. DPM++ SDE Heun seems to make the backgrounds and the image overall more coherent, but it semi-normalizes the whole thing. That's high-accuracy samplers for you. Euler samplers pop off more often but are more hit-and-miss. Running more steps is not necessarily better: generally speaking, once you hit convergence on certain samplers (by which I mean the ones that can converge), additional steps only smooth out the details from there. If you want to go for that approach, 28 is a better step count. Euler never converges, so be prepared to fuck with step counts. Treat the upscaling pass and the initial pass as separate; sometimes things that look extremely bad become very good in the upscaling pass, because the composition was good even though the details sucked. Weird red or blue blotches are sometimes a sign that you haven't sampled sufficiently. Hopefully an upscaling pass can fix the colors; otherwise you might want to consider changing seeds or step count.
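
    The cheapest way to find a sampler's convergence point on this model is a fixed-seed step sweep; a quick diffusers sketch:

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")

        # Same seed and prompt at increasing step counts: once a converging
        # sampler has converged, only fine details change between images.
        for steps in (12, 18, 24, 28):
            image = pipe(
                "score_9, 1girl",
                num_inference_steps=steps,
                generator=torch.Generator("cuda").manual_seed(1234),
            ).images[0]
            image.save(f"steps_{steps:02d}.png")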

  • Q: Recommended loras?
    A: Lowlight is good. I love noise offset and I think it will improve almost any gen, but be aware that it will fuck with colors (Kazuradrop's kimono changes from green to white, etc.). Some style loras work really nicely; I like Neisen. Character loras are great. This model is essentially made for maximum compatibility with neclordx's character loras, so that's a given. I will link Lowlight and my favorite Neisen lora in the suggested resources. Otherwise, just use the civitai linked-resources page in the example images and find them yourself.
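
    Stacking loras with individual weights, sketched in diffusers; the filenames and weights here are placeholders, not recommendations:

        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_single_file(
            "itCameToMeInADream_Mid.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("lowlight.safetensors", adapter_name="lowlight")
        pipe.load_lora_weights("noise_offset.safetensors", adapter_name="offset")
        pipe.load_lora_weights("neisen_style.safetensors", adapter_name="style")

        # Per-lora strengths; noise offset is kept low since it fucks with colors.
        pipe.set_adapters(["lowlight", "offset", "style"],
                          adapter_weights=[0.8, 0.5, 0.7])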

  • Q: Last words?
    A: This website is unoptimized to shit; just opening it up in the browser eats so much RAM. There are very nice QOL features like the image gallery, but just opening a website should not chug my browser, wtf. It's still better than certain other websites that don't even let you download models, though.

    Alright that's my unrelated rant over. Download and have fun.

What is This Came to Me in a Dream?

This Came to Me in a Dream is a specialized image-generation AI model, distributed as a Safetensors checkpoint, created by AI community user Jemnite. Derived from the Stable Diffusion (Pony) base model, This Came to Me in a Dream has been fine-tuned on a dataset of AI-generated and user-contributed images. This fine-tuning makes it well suited to the use cases it was designed for: anime, base model, illustration.

Can I download This Came to Me in a Dream?

Yes! You can download the latest version of This Came to Me in a Dream from here.

How to use This Came to Me in a Dream?

To use This Came to Me in a Dream, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint models like This Came to Me in a Dream, check out our crash course in AI image generation.
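
If you'd rather script generation than click through the UI, AUTOMATIC1111 also exposes an HTTP API when launched with the --api flag. A minimal sketch (the endpoint and field names are from the public API schema; the prompt and settings are just examples):

    import base64
    import requests

    payload = {
        "prompt": "score_9, 1girl, looking at viewer",
        "negative_prompt": "source_pony",
        "steps": 18,
        "cfg_scale": 7,
        "sampler_name": "Euler a",
        "width": 1024,
        "height": 1024,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    r.raise_for_status()
    # The API returns generated images as base64-encoded strings.
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))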

Download (6.31 GB) – available on desktop only.
You'll need to use a program like A1111 to run this – learn how in our crash course.

Info

Base model: Pony

Latest version (DelicateMidH): 1 File

About this version: DelicateMidH

Faces are way more anime-esque, a little narrower, a little less wide. Everything looks a lot softer. Less of an oil-painting feel than Mid. As with the last version, you can grab it here without going through Civitai's early access BS:

https://huggingface.co/LMFResearchSociety/CheckpointArchive/blob/main/Jemnite/itCameToMeInADream_DelicateMidH.safetensors
