Super Simple GGUF (Quantized) Flux LoRA Workflow

By ArsMachina, about 1 year ago

If your VRAM is insufficient to run Flux, you need to run a quantized version. This is a really simple workflow with LoRA loading and upscaling. Keep in mind that the quantized versions need slightly higher LoRA strength values than the full-precision ones.

This workflow is based on the GGUF model loader for ComfyUI:
https://github.com/city96/ComfyUI-GGUF.
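
As a rough illustration of the loader side of such a graph, here is a minimal sketch in ComfyUI's API (JSON) format, written as a Python dict. UnetLoaderGGUF comes from the ComfyUI-GGUF pack, while DualCLIPLoader and LoraLoader are core nodes; the file names and the 1.15 strength are placeholder assumptions, not the exact values this workflow uses, and a complete graph would also need sampler, VAE decode, and save nodes.

```python
# Minimal sketch (not the author's exact workflow): the loader portion of
# a GGUF Flux + LoRA graph in ComfyUI's API format. File names and the
# LoRA strength are placeholders; a full graph also needs sampler, VAE
# decode, and save nodes before it can be queued.
import json

prompt_fragment = {
    # Quantized Flux UNet via the ComfyUI-GGUF custom node.
    "1": {"class_type": "UnetLoaderGGUF",
          "inputs": {"unet_name": "flux1-dev-Q4_K_S.gguf"}},
    # Flux uses two text encoders, wired with the core DualCLIPLoader.
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
                     "clip_name2": "clip_l.safetensors",
                     "type": "flux"}},
    # Quantized models tend to want slightly higher LoRA strength,
    # e.g. ~1.1-1.2 where 1.0 suits the full-precision model.
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["2", 0],
                     "lora_name": "my_flux_lora.safetensors",
                     "strength_model": 1.15, "strength_clip": 1.15}},
}

print(json.dumps(prompt_fragment, indent=2))
```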

Update:

Added an upgraded "Simple" version. It requires 2 custom node packs to be installed. What is different in it:

  1. Added multi-LoRA support with the rgthree LoRA stacker. This is the best pick for low-end video cards I've been able to find (see the first sketch below).

  2. Added a Civitai-friendly file saver with the required supporting nodes (see the second sketch below).

  3. Organised everything into groups a little bit.

It is still really easy to use, and it is now a good starting point for more complex workflows, since the generation info will be saved for Civitai even if you do more complex operations.
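
For illustration only, here is a sketch of what multi-LoRA stacking amounts to in API format: each LoraLoader consumes the previous node's model/CLIP outputs, so the LoRAs apply in sequence, and the rgthree stacker simply collapses this chain into a single node in the graph editor. Node ids and file names below are hypothetical.

```python
# Sketch of the multi-LoRA idea (hypothetical ids and file names): each
# LoraLoader patches the model/CLIP coming out of the previous node, so
# the LoRAs apply in sequence. The rgthree stacker packs this chain into
# a single node in the graph editor.
loras = [("style_lora.safetensors", 1.1),
         ("character_lora.safetensors", 0.9)]

nodes = {}
model_src, clip_src = ["1", 0], ["2", 0]   # assumed UNet/CLIP loader ids
for i, (name, strength) in enumerate(loras, start=10):
    nid = str(i)
    nodes[nid] = {"class_type": "LoraLoader",
                  "inputs": {"model": model_src, "clip": clip_src,
                             "lora_name": name,
                             "strength_model": strength,
                             "strength_clip": strength}}
    # LoraLoader outputs: 0 = patched model, 1 = patched CLIP.
    model_src, clip_src = [nid, 0], [nid, 1]
```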
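
And a rough sketch of why a Civitai-friendly saver matters: the site reads generation info from text chunks embedded in the PNG, which is where ComfyUI's savers write the prompt graph. This Pillow snippet reproduces the mechanism with illustrative keys and payload, not the exact chunks the custom node writes.

```python
# Sketch only: embed generation info as PNG text chunks, the mechanism
# Civitai-friendly savers rely on. Keys and payload are illustrative.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64))           # stand-in for a generated image
meta = PngInfo()
meta.add_text("prompt", json.dumps({"3": {"class_type": "LoraLoader"}}))
img.save("output.png", pnginfo=meta)

print(Image.open("output.png").text["prompt"])  # what a parser reads back
```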

Download (2.07 KB)


Info

Base model: Flux.1 D

Latest version (Super Simple): 1 File


Versions: 2
