There will be ONE final version, aka Stage 3; it's currently training on the standard 1-million-image pack that has proven so effective at fitting models.
That will be BigAsp VPRED Solidified, but for now it's just cooking.
https://huggingface.co/AbstractPhil/OMEGA-BIGASP/tree/main
You can find the stage 1 and stage 2 LoRAs here; grab the most recent and make the CLIP merges yourself if you like, or simply merge the BigAsp CLIPs yourself and come up with a better mix than I did.
I did NOT train the CLIPs, as my CLIPs and the BigAsp CLIPs are very different; instead I trained the UNet against a CLIP_L that is half one of mine and half BigAsp. Essentially this is a heavy BigAsp finetune with Omega CLIP_L acting as the finetune controller. A merge sketch follows below.
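If you want to roll your own mix, a CLIP merge is just a weighted average of matching tensors. Here's a minimal sketch using safetensors; the file names are placeholders, not the actual release files:

```python
# Minimal sketch of a 50/50 CLIP_L merge. File names are placeholders --
# point them at the CLIP_L weights extracted from each checkpoint.
from safetensors.torch import load_file, save_file

RATIO = 0.5  # 0.5 = balanced; raise toward 1.0 to weight BigAsp heavier

bigasp = load_file("bigasp_clip_l.safetensors")
omega = load_file("omega_clip_l.safetensors")

merged = {}
for key, w in bigasp.items():
    if key in omega and omega[key].shape == w.shape:
        merged[key] = RATIO * w + (1.0 - RATIO) * omega[key]
    else:
        merged[key] = w  # keep the BigAsp tensor for any non-matching keys

save_file(merged, "omegaasp_clip_l.safetensors")
```

Adjust RATIO to taste; everything below the 50/50 release was produced by exactly this kind of weighted blend.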
Omega CLIP_L has well over 100 million training samples behind it, so if there is an expert here, that one is the grandmaster.
Yeah don't.... don't type things into this unless you really want to see them.
BigAsp2 is fucking wild.
This thing does not conform to any standard deviations.
It does not comply with standard finetune options.
It completely ignores finetune training at times.
Even simple finetune data can destroy LARGE amounts of what was trained into it.
Converting it to v-pred involved training only the noisy timesteps, using similar yet divergent realistic data. This was a semi-successful and very low-cost conversion, which is pretty cool.
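For the curious, the conversion objective looks roughly like the diffusers-style sketch below. The exact timestep cutoff isn't published, so the range here is an illustrative assumption; the point is that only a sub-range of the schedule ever gets trained:

```python
# Sketch of a v-pred objective restricted to the noisy end of the
# schedule. NOISE_CUTOFF is an assumed, illustrative value.
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="v_prediction")
NOISE_CUTOFF = 600  # train only timesteps in [600, 1000)

def vpred_step(unet, latents, text_embeds, added_cond_kwargs):
    bsz = latents.shape[0]
    # Sample timesteps from the restricted (high-noise) range only.
    timesteps = torch.randint(
        NOISE_CUTOFF, scheduler.config.num_train_timesteps,
        (bsz,), device=latents.device
    ).long()
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)
    # v-prediction target: v = alpha_t * noise - sigma_t * x0.
    target = scheduler.get_velocity(latents, noise, timesteps)
    pred = unet(noisy_latents, timesteps,
                encoder_hidden_states=text_embeds,
                added_cond_kwargs=added_cond_kwargs).sample
    return F.mse_loss(pred.float(), target.float())
```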
50/50 (Balanced): chef's choice, a 50/50 mix of Omega and BigAsp2. It has the best controllers, but if you want something wilder go down the chain to AspHeavy, or if you're feeling exceptionally masochistic grab the full Refit from before the merge.
25/75 (OmegaHeavy): quite good, as Omega is stable and capable of almost anything.
75/25 (AspHeavy): BigAsp is fun, but it's fairly untamed and very bad at counting. A merge sketch follows this list.
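All three mixes can be produced the same way from the refit and an Omega checkpoint. A sketch, assuming both files share SDXL key names (paths are placeholders):

```python
# Sketch of producing the three release mixes; paths are placeholders
# and both checkpoints are assumed to share SDXL key names.
from safetensors.torch import load_file, save_file

refit = load_file("bigasp2_refit_vpred.safetensors")
omega = load_file("omega_v0001.safetensors")

for asp, name in [(0.25, "OmegaHeavy"), (0.50, "Balanced"), (0.75, "AspHeavy")]:
    merged = {}
    for key, w in refit.items():
        if key in omega and omega[key].shape == w.shape:
            merged[key] = asp * w + (1.0 - asp) * omega[key]
        else:
            merged[key] = w  # keep the refit tensor when keys don't line up
    save_file(merged, f"bigasp2_x_simomega_{name}.safetensors")
```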
I'd say this model is FAR from a plain-English expert, but it's a great v-pred conversion prototype showcasing how much minimal training with the Omega CLIPs can do.
Disappointingly, I expected these BigAsp CLIPs to teach plain English to Omega, and the opposite happened: Omega was lobotomized, so that training was halted and the BigAsp training was allowed to complete instead.
A refitted and finetuned version of BigAsp2, repaired and converted to v-pred.
On top of the refit, the V1 merge is 50/50 OmegaV0001 CLIPs and BigAsp CLIPs.
Finetune trained with:
Stage 1 -> 80,000 samples, middle timesteps trained
    OmegaAsp CLIP_L_1 = BigAsp CLIP_L merged 50/50 with OmegaSim CLIP_L (frozen)
    BigAsp CLIP_G (frozen)
Stage 2 -> roughly 200,000 samples (I lost track, honestly)
    OmegaAsp CLIP_L_2 = OmegaAsp_CLIP_L_1 merged 50/50 with BigAsp CLIP_L (now 75% BigAsp) (frozen)
    BigAsp CLIP_G (frozen)
Refitted with frozen CLIPs, using the Sim Omega 73 clip_l and clip_g; the freezing setup is sketched below.
This both introduced many safe elements that otherwise don't work in BigAsp and destroyed many of the NSFW elements that were completely ruining generations during the conversion.
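In diffusers terms, "frozen CLIPs" just means no gradients ever reach the text encoders. A minimal sketch of that setup, assuming a diffusers-style training loop (the checkpoint path is a placeholder):

```python
# Sketch of the frozen-CLIP refit setup. Only the UNet receives
# gradients, so the merged CLIP_L steers the finetune without
# drifting itself.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "bigasp2_refit_vpred.safetensors", torch_dtype=torch.float32
)

# Freeze both text encoders (CLIP_L and CLIP_G) plus the VAE.
for module in (pipe.text_encoder, pipe.text_encoder_2, pipe.vae):
    module.requires_grad_(False)
    module.eval()

# Train the UNet only.
pipe.unet.requires_grad_(True)
pipe.unet.train()
optimizer = torch.optim.AdamW(pipe.unet.parameters(), lr=1e-6)
```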
The three versions are intended for those who prefer either the original flavor or the new one.
BigAsp2-X-SimOmega vpred is a Safetensors checkpoint created by AbstractPhil. Derived from Stable Diffusion XL 1.0, it has been extensively finetuned on realistic data for NSFW and base-model use.
You can download the latest version from the repository linked above. To use it, download the checkpoint, load it in a UI that runs Stable Diffusion models (for example, AUTOMATIC1111), and generate from a detailed text prompt; experiment with different prompts and settings to get the results you want.