Astigmatism (formerly 'Semantic Shift')

Astigmatic Correction 0.1
sirrece
8 months ago

17-01-2025 START

Ok, so the newest astigmatism positive, +0.6, is here. It's really, really good, but as in all things, I recommend blending it with 0.5 to attenuate overfitting and truly get the best results possible. I'll look at a LoRA merge later and see if I can make an easy package with an "optimal" astigmatism at this stage.

Hope you all enjoy. I'm working on a really large negative for 0.6, but I need more Buzz, so it will take a little while to train. Rest assured, it is on the way, and I think it will be quite a big jump.

17-01-2025 END





---

Post 0.5b, I recommend just playing with +0.5b and/or -0.5b.

When using the negative, be sure to crank CFG up to start, as the extra CFG headroom is the main advantage it affords you.

In small amounts, it can also increase creativity, but broadly, +0.5b is the powerhouse, despite having a much smaller dataset.

Below is stuff I wrote previously for anything pre-0.5b:
---------------------
I recommend the following mix for anyone starting out (I will release some sort of mixed LoRA sometime in the next week that will require less VRAM than loading 4 LoRAs, lol):

Astigmatism +0.5
Astigmatism -0.5
Astigmatism +0.4b
Astigmatism -0.2

The +'s at 0.33 each
The -'s at -0.33 each

This has to do with overfitting in the training process, and with errors on my part. Rather than address those errors directly, which I cannot do with limited resources, since finding the optimal setups would require many, many iterations of the LoRAs to test, using blends mitigates overfitting and generally improves performance. You can see this from the plethora of merged checkpoints on Civitai, including ones that simply merge newer versions of a model into the older version.
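The blend above is just a weighted linear combination of each LoRA's weight deltas. Here is a minimal sketch of that arithmetic, using tiny hypothetical 2x2 matrices in place of the real LoRA tensors (the actual files would be merged with your tooling of choice, not this toy code):

```python
# Linear LoRA blend: the merged delta is the sum of each LoRA's
# weight delta scaled by its strength. Toy 2x2 lists stand in for
# the real weight tensors.

def blend_loras(deltas, scales):
    """Return the weighted sum of several same-shaped weight deltas."""
    assert len(deltas) == len(scales)
    rows, cols = len(deltas[0]), len(deltas[0][0])
    merged = [[0.0] * cols for _ in range(rows)]
    for delta, s in zip(deltas, scales):
        for i in range(rows):
            for j in range(cols):
                merged[i][j] += s * delta[i][j]
    return merged

# Hypothetical stand-ins for the four LoRAs in the mix above
plus_05  = [[1.0, 0.0], [0.0, 1.0]]
plus_04b = [[0.5, 0.5], [0.5, 0.5]]
minus_05 = [[0.0, 1.0], [1.0, 0.0]]
minus_02 = [[0.2, 0.2], [0.2, 0.2]]

# Positives at 0.33 each, negatives at -0.33 each
merged = blend_loras(
    [plus_05, plus_04b, minus_05, minus_02],
    [0.33, 0.33, -0.33, -0.33],
)
```

Because the combination is linear, the same recipe can be baked into a single merged LoRA file, which is exactly why one mixed LoRA needs less VRAM than loading all four separately.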

Basically, an older version may "understand" something better than a newer version, and vice versa, but as long as your versions are MOSTLY improved, the merge process will over time lead to the model becoming a better generalizer. These particular LoRAs, which directly target the generalization and capabilities of the model, are no exception.

Love yall, and this community.


If anyone who has the resources wants to collaborate on further training, please contact me. I have had a great deal of success in improving prompt adherence, and I suspect this can be massively grown with a solid community effort.


Carefully examine the weights used to learn how to adjust this LoRA. Think of it like adjusting the focus on a lens that you are looking through. Every prompt and checkpoint combination will have different needs, but ultimately, most of them can be dialed in such that adherence begins to work within a certain range where it wasn't working previously.
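One systematic way to "dial in" a prompt/checkpoint combination is to sweep a small grid of candidate strengths and render a test prompt at each point. This is a sketch of the bookkeeping only; the render call is a placeholder and the adapter names are hypothetical:

```python
from itertools import product

def weight_grid(pos_strengths, neg_strengths):
    """Enumerate every (positive, negative) strength pair to try."""
    return [(p, n) for p, n in product(pos_strengths, neg_strengths)]

# Sweep positive LoRA strength against negative LoRA strength
grid = weight_grid([0.2, 0.33, 0.5], [-0.5, -0.33, -0.2])

for pos, neg in grid:
    # Placeholder: render the same seed/prompt at each grid point, e.g.
    #   render(prompt, loras={"astigmatism+0.5b": pos,
    #                         "astigmatism-0.5b": neg})
    # then compare outputs and keep the pair where adherence locks in.
    pass
```

A fixed seed across the sweep makes the comparison apples-to-apples, since the only variable changing between images is the LoRA strengths.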

I suppose I will have to do a video on the "why" behind this soon, as my ADHD and time constraints put writing it up the way I want to beyond my current capacity. But a video I probably can do, although it will be... chaotic.




This model is based on work I did on my "Unsettling" LoRA. It uses some of the images generated there, along with subsequent images made using the LoRA progeny of those, as well as the techniques I experimented with.

Basically, the goal of this LoRA is to "semantically shift" SDXL such that terms with a set meaning are entirely changed in an internally consistent manner. I used a technique to do this partially in the Unsettling LoRA, although it was overtrained, and I became intrigued by the idea that "good" prompts remain "good," albeit on a different axis, even if a given model's internal understanding of them "shifts." In other words: a unique and interesting prompt can create unique and interesting images across multiple new themes if you play with the brain of the model in a directed way.

How did I do this?

I found areas of overtraining within SDXL and targeted them: the Mona Lisa, the Pillars of Creation, etc., redirecting each to new images. As I suspected, this had ripple effects on the way the entire model perceives the concepts connected to the modified images, and these effects are quite substantial.


UPDATE

Since this started, the purpose of this LoRA has changed substantially: it is now basically about improving SDXL's overall prompt adherence and win rate, using very small training datasets that target the areas of overfitting in the model and teach it to generalize them.

A side effect of this is that it is a lot easier to produce images at arbitrary resolutions.


Info

Base model: SDXL 1.0


About this version: Astigmatic Correction 0.1

This is my first attempt at using my method of diversifying overtrained areas in the core SDXL model to try to coax out more "intelligence" in the model.

This obviously is challenging to do with LoRas, and a checkpoint fine-tune would be a LOT more effective, but mostly I'm just testing out the idea and seeing what can happen with this.


This first LoRA uses just 45 images, and the correction targets only 10 specific overtrained areas, with each prompt getting a few images to diversify its output:

Pillars of Creation

The Birth of Venus

Flaming June

Irises in Monet's Garden

A Sunday Afternoon on the Island of La Grande Jatte

Girl with a Pearl Earring

The Mona Lisa

Abbey Road

Liberty Leading the People

The Scream

Note that this one wasn't semantically perfect; or rather, the correction wasn't as strict as I'd like to apply in the future if I were to seriously pursue this method for improving prompt understanding. My thinking is that these overtrained images directly affect the semantic understanding of the model, which can be partially corrected for.

In this case, the included images tried to keep elements of the semantic meaning of the titles above, but in some cases this may not have been as strict or obvious as I'd like in order to guide the model the way I want.

15 Versions
