V1.1 changelog:
trained on a new dataset (on top of v1.0) to integrate deeper linguistic understanding and to add detail and precision
known issues: from version 1 onwards there are problems with hands and some other anatomical details. Version 1.1 fixes some of these issues; they are still present, but less evident than in v1.
V1 changelog:
training on a large dataset with LLM-generated prompts; accepts a T5-XXL-style natural-language prompt as well as the usual tags
works best with: Deepshrink and Perturbed Attention Guidance (PAG)
unfortunately, altering the CLIP and the model to integrate a form of T5-XXL prompting made it more unstable, so generation needs more careful control. I recommend using ComfyUI and copying the node setup from my example images; a rough diffusers equivalent is sketched below.
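The recommendation above is for ComfyUI; for diffusers users, here is a minimal sketch of enabling Perturbed Attention Guidance on this checkpoint (the file name, prompt, and parameter values are illustrative assumptions of mine, not the author's settings; Deepshrink is a ComfyUI node and is not shown):

import torch
from diffusers import StableDiffusionXLPipeline, AutoPipelineForText2Image

# Load the checkpoint from a local .safetensors file (path is a placeholder).
pipe = StableDiffusionXLPipeline.from_single_file(
    "sdxl_hk_v1.safetensors", torch_dtype=torch.float16
).to("cuda")

# Re-wrap the pipeline with Perturbed Attention Guidance (PAG) enabled.
pipe = AutoPipelineForText2Image.from_pipe(pipe, enable_pag=True)

image = pipe(
    "cinematic photo of a woman walking through a neon-lit street, 85mm lens, shallow depth of field",
    num_inference_steps=30,
    guidance_scale=5.0,
    pag_scale=3.0,  # strength of the perturbed-attention term
).images[0]
image.save("sdxl_hk_pag.png")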
V1 BETA2 changelog:
model completely redesigned from scratch
advanced training for fashion, photography and cinematic
merge DPO
beta features: lettering and magazine cover
trigger for magazine cover:
magazine cover
trigger for trained covers:
SURFACE
XIOX
DIOR
PAPER
ID
HERO
GQ
ADIDAS
VIPER
PORTER
HARRODS
LOVE WAN
V
INTERVIEW
THE FACE
TEENVOG
HARPER'S BAZAAR
ASOS
WONDERLAND
DAZED
MIXMAG
NUMERO
WOMAN
MOEVIR
ALLURE
WAD
ROSALIA
FASHION
COSMOPOLITAN
WALLPAPER
BILLBOARD
W
L'OFFICEL
GLAMOUR
VOGUE
FISHEYE
ELLE
TIME
VOUGE
PLAYBOY
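An illustrative prompt of my own (not from the author), combining the general trigger with one of the trained cover names:

magazine cover, VOGUE, close-up portrait of a woman in a red couture dress, studio lighting, bold typography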
P.S.: preferably use Karras or AYS (Align Your Steps) as the scheduler
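For diffusers users, a Karras sigma schedule can be selected like this (a minimal sketch; pipe is an already-loaded SDXL pipeline as in the earlier snippet):

from diffusers import DPMSolverMultistepScheduler

# DPM++ 2M with Karras sigmas (what most UIs list as "DPM++ 2M Karras").
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)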
V0.97b changelog:
Cinematic/sci-fi training 25 epochs 67000 steps
Fashion training 20 epochs 43000 steps
Detail + Unet training 10 epochs 70000 steps
+ DPO merge
CosXL versions:
Cosine-Continuous Stable Diffusion XL (CosXL) is an experimental SD model type from stability.ai. The most notable feature of this schedule is its capacity to fix tonality issues in SD models and produce the full color range, from 'pitch black' to 'pure white', performing much better than noise-offset or LoRA solutions.
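If you run a CosXL variant through diffusers instead of a UI, it needs an EDM-style scheduler rather than the stock SDXL one. A rough sketch, assuming a local copy of the CosXL checkpoint; the file name is a placeholder, and the scheduler settings follow the publicly documented CosXL setup, which I am assuming this variant also uses:

import torch
from diffusers import StableDiffusionXLPipeline, EDMEulerScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "sdxl_hk_cosxl.safetensors", torch_dtype=torch.float16
).to("cuda")

# CosXL checkpoints are trained on a cosine-continuous (EDM) schedule with
# v-prediction, so swap the default scheduler for an EDM Euler scheduler.
pipe.scheduler = EDMEulerScheduler(
    sigma_min=0.002, sigma_max=120.0, sigma_data=1.0, prediction_type="v_prediction"
)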
0.95c changelog:
more photographic, fix hands, fix eyes
close-up photography
better skin details
better raytracing
finetuned on new dataset
0.9 changelog:
more graphic (added 4 trained LoRAs)
fine-tuned on the best results of 0.75
0.75 changelog:
more sci-fi
more cyberpunk
more cinematic
Introducing SDXL HK 0.9, the versatile amalgamation of cutting-edge advancements! This iteration pushes the boundaries of realism and sci-fi, evolving into a powerhouse of innovation. What's new in this version?
Enhanced Graphics: We've amplified the visual experience by incorporating seven meticulously trained LoRAs, enriching the model's graphic capabilities to create more immersive content.
Fine-Tuning: Building on the success of the previous version (0.75), this release was fine-tuned on its best results.
What set SDXL HK 0.75 apart?
Sci-Fi Infusion: Infusing more elements of science fiction, the model transcends traditional boundaries, weaving intricate and compelling narratives into its creations.
Cyberpunk Vibe: Embracing the essence of cyberpunk, this version delivers a futuristic, edgy feel, opening doors to a plethora of creative possibilities.
Cinematic: Elevating the cinematic experience, the model crafts visuals that resonate with cinematic grandeur, bringing stories to life in an immersive manner.
Merged Model: SDXL HK 0.75 was built by merging the SDXL base, the refiner, several LoRAs, and a model by Afroman4peace. It then underwent a second merge with two fine-tuned LoRA models specialized in photographic content.
Maintaining Consistency: The primary goal behind this model's creation is to uphold a consistent quality standard alongside my LoRA models, ensuring coherence and reliability across various creative endeavors.
SDXL HK is a highly specialized image-generation model, distributed as a Safetensors checkpoint, created by AI community user Ciro_Negrogni. Derived from the powerful Stable Diffusion XL 1.0 base model, SDXL HK has undergone an extensive fine-tuning process on a dataset of images generated by other AI models or contributed by users. This fine-tuning ensures that SDXL HK can generate images highly relevant to the use cases it was designed for, such as: base model, turbo, stablediffusion xl.
You can download the latest version of SDXL HK from this page.
To use SDXL HK, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint models like SDXL HK, check out our crash course in AI image generation.