Warning: Although these quants work perfectly with ComfyUI, I couldn't get them to work with Forge UI yet. Let me know if this changes. The original non-K quants, which are verified to work with Forge UI, can be found HERE.
[Note: Unzip the download to get the GGUF file. Civitai doesn't support the format natively, hence this workaround.]
These are the K(_M) quants for HyperFlux 8-steps. K quants are slightly more precise and performant than non-K quants. HyperFlux is a merge of Flux.1 Dev with the 8-step Hyper-SD LoRA from ByteDance, converted to GGUF. As a result, you get an ultra memory-efficient and fast DEV (CFG-sensitive) model that generates fully denoised images in just 8 steps while consuming ~6.2 GB of VRAM (for the Q4_0 quant).
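If you want to confirm what you unzipped, a small sketch like the one below can list the quantization types inside the file; K-quant files show tensor types like Q4_K or Q6_K. It assumes the `gguf` Python package from the llama.cpp project (`pip install gguf`), and the file name is just a placeholder.

```python
# Sketch: inspect an unzipped GGUF to see which quantization types it contains.
# Assumes the `gguf` package (pip install gguf); the file name is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("hyperflux-8step-Q4_K_M.gguf")

# Count tensors per quantization type; K quants show up as Q4_K, Q5_K, Q6_K, etc.
types = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, count in types.most_common():
    print(f"{qtype}: {count} tensors")

# Approximate on-disk tensor size, a rough proxy for the VRAM the weights need.
total_gib = sum(int(t.n_bytes) for t in reader.tensors) / 1024**3
print(f"Total tensor data: {total_gib:.2f} GiB")
```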
It can be used in ComfyUI with this custom node, but I couldn't get these quants to work with Forge UI. See https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050 for where to download the VAE, clip_l, and t5xxl models.
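For reference, here is a rough sketch of what the ComfyUI setup looks like when submitted through ComfyUI's HTTP API instead of the graph editor. It is only an illustration under assumptions: the GGUF loader node name ("UnetLoaderGGUF" from the GGUF custom node), the model/encoder/VAE file names, and the sampler settings are placeholders taken from a typical Flux setup, not a verified workflow.

```python
# Minimal sketch of an 8-step HyperFlux run sent to a local ComfyUI instance.
# Assumes ComfyUI on port 8188 with the GGUF custom node installed; node names,
# input names, and file names are assumptions/placeholders - adjust to taste.
import json
import urllib.request

workflow = {
    # Load the quantized HyperFlux UNet via the GGUF loader node.
    "1": {"class_type": "UnetLoaderGGUF",
          "inputs": {"unet_name": "hyperflux-8step-Q4_K_M.gguf"}},
    # Standard Flux text encoders (clip_l + t5xxl) and VAE.
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "clip_l.safetensors",
                     "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",
                     "type": "flux"}},
    "3": {"class_type": "VAELoader", "inputs": {"vae_name": "ae.safetensors"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0], "text": "a red fox in a snowy forest"}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 0], "text": ""}},  # negative prompt (unused at cfg 1)
    "6": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    # 8 steps is the whole point of the Hyper-SD merge; cfg ~1 is typical for
    # Flux, with guidance set separately if you add a FluxGuidance node.
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0],
                     "negative": ["5", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 8, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "hyperflux"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The steps and cfg values in node "7" are the knobs that matter most here: 8 steps for the Hyper-SD distillation, and guidance/cfg to trade prompt adherence against softness as described below.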
Much better quality: you get noticeably better quality and expressiveness at 8 steps compared to Schnell models like FastFlux.
CFG/Guidance sensitivity: Since this is a DEV model, unlike the Hybrid models you get full (distilled) CFG sensitivity, i.e. you can control prompt adherence vs. creativity and softness vs. saturation.
Dev LoRA compatibility: fully compatible with Dev LoRAs, better than the compatibility Schnell models offer.
The only disadvantage: it needs 8 steps for best quality. But then, you'd probably run at least 8 steps for best results with Schnell anyway.
[Current situation: Using the updated ComfyUI (GGUF node), I can run Q6_K on my 11 GB 1080 Ti.]
Download the quant that fits in your VRAM. The additional inference cost is quite small as long as the model fits entirely on the GPU. Size order is Q2 < Q3 < Q4 < Q5 < Q6. I wouldn't recommend Q2 or Q3 unless nothing larger fits in memory.
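If you're unsure which quant will fit, a quick sanity check like the sketch below compares the file size against free VRAM. It assumes PyTorch with CUDA is installed, and the ~2 GiB overhead figure for text encoders, VAE, and activations is a rough assumption, not a measured number.

```python
# Rough sketch: compare a downloaded GGUF's file size against free VRAM.
# Assumes PyTorch with CUDA; the 2 GiB overhead for text encoders, VAE, and
# activations is an assumed ballpark, not a measured figure.
import os
import torch

def fits_in_vram(gguf_path: str, overhead_gib: float = 2.0) -> bool:
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    needed = os.path.getsize(gguf_path) + overhead_gib * 1024**3
    return needed <= free_bytes

# Example with a placeholder file name:
print(fits_in_vram("hyperflux-8step-Q6_K.gguf"))
```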
All the license terms associated with Flux.1 Dev apply.
PS: Credit goes to ByteDance for the Hyper-SD Flux 8-step LoRA, which can be found at https://huggingface.co/ByteDance/Hyper-SD/tree/main