This is an efficient and powerful ComfyUI workflow designed for high-quality image outpainting using the Qwen Image Edit model. Outpainting allows you to extend an image beyond its original borders, seamlessly generating new content that matches the style, lighting, and context of the original. This workflow is optimized for speed and simplicity, delivering impressive results in just a few steps.
Core Concept: Upload an image, and the AI will intelligently expand the canvas, generating plausible and coherent extensions of the scene while perfectly blending the new content with the original.
🤖 Specialized Model: Utilizes the Qwen_Image_Edit-Q5_0.gguf model, specifically fine-tuned for image editing tasks like outpainting and inpainting.
⚡ Lightning Fast: Integrates the Qwen-Image-Edit-Lightning LoRA, enabling high-quality outpainting results in only 8 sampling steps.
🎯 Precision Prompting: Uses dedicated TextEncodeQwenImageEdit nodes that understand the image context, ensuring the model follows instructions for seamless extension.
🖼️ Automated Preprocessing: The workflow automatically pads your image to create space for outpainting and scales it to an optimal size for the model (see the sizing sketch after this list).
🔧 Optimized Pipeline: Pre-configured with expert negative prompts and optimal settings (CFG, sampler) for outpainting, so you get great results by default.
🚀 One-Click Operation: Just load your image and run the workflow. No complicated settings need to be adjusted.
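For the curious, here is a back-of-the-envelope look at what that automated preprocessing works out to. This is a minimal Python sketch, not the node code itself; the function name and the 1024x768 example are my own, while the 384 px padding and the 0.9-megapixel target come from the workflow's defaults:

```python
import math

def preprocessed_size(w, h, pad_lr=384, target_mp=0.9):
    """Pad the canvas left/right, then rescale so total pixels ~= target_mp.

    Mirrors what ImagePadForOutpaint + ImageScaleToTotalPixels do together.
    """
    padded_w, padded_h = w + 2 * pad_lr, h  # top/bottom padding is 0 here
    scale = math.sqrt(target_mp * 1_000_000 / (padded_w * padded_h))
    return round(padded_w * scale), round(padded_h * scale)

print(preprocessed_size(1024, 768))  # -> (1449, 621), roughly 0.9 MP
```

So a 1024x768 photo becomes a 1792x768 padded canvas, which is then scaled down to roughly 1449x621 before sampling (the node may round the final dimensions slightly differently).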
The workflow is neatly grouped into logical sections for easy understanding and customization:
Step 1 - Load models: Loads the main Qwen Image Edit model, its specialized CLIP vision encoder, and the VAE.
Step 2 - Upload image for editing: Loads your input image and preprocesses it for outpainting (padding and scaling).
Step 3 - Prompt: Supplies the instructions that tell the AI how to outpaint the image, with pre-written positive and negative prompts for optimal results.
Sampling & Decoding: The KSampler runs for 8 steps with the Euler sampler, and the VAE decodes the latents into the final outpainted image (a sketch of this node's settings follows this list).
Image Output: The SaveImage node saves the final result.
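If you export the workflow in ComfyUI's API format, the sampling step boils down to a single KSampler entry. The sketch below (written as a Python dict) is illustrative only: the node IDs and the "simple" scheduler are my assumptions, while the steps, sampler, CFG, and denoise values are the workflow's actual settings.

```python
# Hypothetical node IDs; only the numeric settings are taken from the workflow.
sampler_node = {
    "17": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["16", 0],         # patched model (LoRA + AuraFlow + CFGNorm)
            "positive": ["10", 0],      # TextEncodeQwenImageEdit, positive prompt
            "negative": ["11", 0],      # TextEncodeQwenImageEdit, negative prompt
            "latent_image": ["12", 0],  # VAEEncode output
            "seed": 42,
            "steps": 8,                 # Lightning LoRA is tuned for 8 steps
            "cfg": 1.0,
            "sampler_name": "euler",
            "scheduler": "simple",      # assumption; check your exported JSON
            "denoise": 0.95,
        },
    }
}
```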
Download & Install:
Ensure you have ComfyUI Manager to easily install missing custom nodes.
Required Custom Nodes: ComfyUI-GGUF (for loading the .gguf models).
Download the .json file from this post.
Load the Models:
Main Model: Place Qwen_Image_Edit-Q5_0.gguf in your ComfyUI/models/gguf/ folder.
CLIP Model: Place qwen2.5-vl-7b-it-q4_0.gguf in the same gguf/ folder.
VAE: The workflow points to qwen_image_vae.safetensors. Ensure it's in your models/vae/ folder.
LoRA: Place Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors in your models/loras/ folder. Adjust the path in the LoraLoader node if yours is in a subfolder (e.g., qwen_loras/).
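Before queueing anything, you can sanity-check that every file landed in the right folder. A small sketch, assuming a standard ComfyUI layout; adjust COMFYUI_DIR (and the gguf/ paths) if your install differs:

```python
from pathlib import Path

COMFYUI_DIR = Path.home() / "ComfyUI"  # adjust to your installation

required_files = [
    "models/gguf/Qwen_Image_Edit-Q5_0.gguf",
    "models/gguf/qwen2.5-vl-7b-it-q4_0.gguf",
    "models/vae/qwen_image_vae.safetensors",
    "models/loras/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors",
]

for rel in required_files:
    path = COMFYUI_DIR / rel
    print(("OK     " if path.exists() else "MISSING") + f"  {path}")
```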
Load Your Image:
In the LoadImage node, change the image name to your own file (e.g., my_landscape.jpg).
Customize the Outpainting (Optional):
The positive prompt is pre-configured for general outpainting. For specific requests (e.g., "extend the garden and add a fountain"), you can modify the text in the Positive Prompt node (TextEncodeQwenImageEdit).
Run the Workflow:
Queue the prompt in ComfyUI. The final image will be saved to your ComfyUI/output/ folder.
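You can also queue the workflow headlessly through ComfyUI's HTTP API. A minimal sketch, assuming the server runs at the default 127.0.0.1:8188 and that you exported the workflow with "Save (API Format)" (the filename here is a placeholder):

```python
import json
import urllib.request

with open("qwen_outpaint_api.json") as f:  # placeholder filename
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # server replies with the queued prompt_id
```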
Key Nodes & Settings:

| Node | Purpose | Key Settings |
| --- | --- | --- |
| LoaderGGUF | Loads the main Qwen Image Edit model. | Qwen_Image_Edit-Q5_0.gguf |
| ClipLoaderGGUF | Loads the Qwen vision encoder. | qwen2.5-vl-7b-it-q4_0.gguf |
| VAELoader | Loads the Qwen VAE for encoding/decoding. | qwen_image_vae.safetensors |
| LoraLoaderModelOnly | Applies the Lightning LoRA for fast sampling. | Strength: 1.0 |
| LoadImage | Loads your input image. | |
| ImagePadForOutpaint | Core node. Adds transparent padding around the image for the AI to fill. | Left/Right: 384, Top/Bottom: 0, Feather: 48 |
| ImageScaleToTotalPixels | Scales the padded image to an optimal size for the model. | Megapixels: 0.9 |
| TextEncodeQwenImageEdit | Positive prompt: instructs the model on how to extend the image. | |
| TextEncodeQwenImageEdit | Negative prompt: instructs the model on what to avoid (seams, artifacts). | |
| VAEEncode | Encodes the scaled image into the latent space. | |
| ModelSamplingAuraFlow | Configures the model for Aura Flow sampling. | Shift: 3.0 |
| CFGNorm | Patches the model for CFG. | Strength: 1.0 |
| KSampler | Performs the outpainting denoising process. | Steps: 8, Sampler: euler, CFG: 1.0, Denoise: 0.95 |
| VAEDecode | Decodes the final latents back into an image. | |
| SaveImage | Saves the finished, outpainted image. | |
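To make the Feather: 48 setting less abstract: outpainting works off a mask in which the padded areas are fully regenerated and a short ramp just inside the original image softens the seam. The numpy sketch below illustrates the idea only; it is not ImagePadForOutpaint's actual implementation:

```python
import numpy as np

def outpaint_mask(width, height, pad_left=384, pad_right=384, feather=48):
    """1.0 = regenerate, 0.0 = keep; a linear ramp blends the boundary."""
    mask = np.zeros((height, pad_left + width + pad_right), dtype=np.float32)
    mask[:, :pad_left] = 1.0               # new canvas on the left
    mask[:, pad_left + width:] = 1.0       # new canvas on the right
    ramp = np.linspace(1.0, 0.0, feather)  # fade from seam into the original
    mask[:, pad_left:pad_left + feather] = np.maximum(
        mask[:, pad_left:pad_left + feather], ramp)
    mask[:, pad_left + width - feather:pad_left + width] = np.maximum(
        mask[:, pad_left + width - feather:pad_left + width], ramp[::-1])
    return mask

print(outpaint_mask(1024, 768).shape)  # (768, 1792)
```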
Tips:
Image Choice: Start with images that have a clear, continuable background (e.g., skies, water, walls, fields). Results are most seamless when the AI has clear patterns to continue.
Prompt Guidance: The provided positive prompt is excellent for general use. For more creative control, try instructions like: "Extend the forest and add a path on the right," or "Continue the architecture in the same Gothic style."
Padding Settings: The ImagePadForOutpaint node is set to add 384 pixels to the left and right. You can adjust these values (e.g., 512, 256) to control how much the image is expanded in each direction.
Denoise Strength: The denoise value of 0.95 means the original image will be largely preserved. Lower values (e.g., 0.8) will preserve it more but may be less creative; higher values give the AI more freedom but risk altering the original.
Model Download Links:
Qwen_Image_Edit-Q5_0.gguf: https://huggingface.co/QuantStack/Qwen-Image-Edit-GGUF
qwen2.5-vl-7b-it-q4_0.gguf: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders
qwen_image_vae.safetensors: https://huggingface.co/QuantStack/Qwen-Image-Edit-GGUF
Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main
This workflow showcases the incredible capability of the Qwen Image Edit model for outpainting tasks. It removes the technical barrier, providing a streamlined, one-click solution to creatively extend your images while maintaining perfect consistency. It's a must-try for photographers, digital artists, and anyone looking to expand their creative canvas.
If you use this workflow, please share your results! I'd love to see what you create.
Go ahead and upload yours!