QuakeSkin Diffusion is trained on UV-unwrapped texture layouts for the Quake 1 Ranger model and can generate a wide variety of novel character types not seen in the small dataset (12 random skins from https://www.moddb.com/games/quake/addons/quake-1-skins-pack). Training was run until the model learned the concept without reproducing any of the dataset examples.
https://www.dropbox.com/s/3p06uurglozq8ya/QuakeSkin_Input.png?dl=0
To ensure the correct layout, use QuakeSkin_Input.png in img2img at 512x512 resolution. In my testing I used the euler_a sampler at 50 steps and CFG 12, with a denoising strength of 0.75.
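If you drive AUTOMATIC1111 through its web API instead of the UI, the first-pass settings above can be sketched as a request payload. This is a minimal sketch assuming a default local A1111 instance; the field names follow the /sdapi/v1/img2img endpoint, and the prompt/helper names here are just illustrative.

```python
import base64

def build_img2img_payload(template_path, subject,
                          steps=50, cfg=12, denoise=0.75, size=512):
    """Build a first-pass img2img request body from the template image.

    Defaults mirror the settings above: euler_a, 50 steps, CFG 12,
    denoising strength 0.75, 512x512.
    """
    with open(template_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode("ascii")
    return {
        "init_images": [init_image],          # the UV layout template
        "prompt": f"{subject} in demoura artstyle",  # subject + trigger words
        "sampler_name": "Euler a",
        "steps": steps,
        "cfg_scale": cfg,
        "denoising_strength": denoise,
        "width": size,
        "height": size,
    }

# Example (assumes a local A1111 server with --api enabled):
# payload = build_img2img_payload("QuakeSkin_Input.png", "cyborg knight")
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```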
Prompt your subject followed by the trigger words "in demoura artstyle".
The front and back usually match up, but not every time, so generate in batches and pick the best ones.
To finish the skin, you can feed the result back into img2img at 1024x1024 resolution, using similar or more steps and CFG, with a denoising strength of 0.2 - 0.3 (a higher denoising strength can sometimes lose the back of the head and draw two front faces).
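The second pass only needs the settings changed, not the template: same sampler, larger canvas, and a deliberately low denoising strength so the layout survives. A small sketch of those settings as a config dict, with the exact step count left as a tweakable assumption:

```python
def refinement_settings(steps=60, cfg=12, denoise=0.25):
    """img2img settings for the 1024px cleanup pass described above.

    Keeping denoise in the 0.2 - 0.3 range preserves the layout;
    higher values risk redrawing the back of the head as a second face.
    """
    if not 0.2 <= denoise <= 0.3:
        raise ValueError("denoising strength should stay in 0.2 - 0.3")
    return {
        "sampler_name": "Euler a",
        "steps": steps,
        "cfg_scale": cfg,
        "denoising_strength": denoise,
        "width": 1024,
        "height": 1024,
    }
```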
To view your skins correctly on a player model, find and download a Ranger (A-posed & rigged) model and scale the UV map in the Y dimension until the textures fit properly.
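In practice you would do the UV scaling interactively in Blender's UV editor, but the operation itself is simple: scale every (u, v) coordinate in the V (Y) dimension about a pivot. A minimal sketch, with the pivot and factor as assumptions you would tune by eye:

```python
def scale_uvs_y(uvs, factor, pivot=0.5):
    """Scale a list of (u, v) pairs in the V dimension around a pivot.

    factor < 1 squashes the UVs vertically, factor > 1 stretches them;
    the pivot (default: the middle of UV space) stays fixed.
    """
    return [(u, pivot + (v - pivot) * factor) for u, v in uvs]

# Squashing to half height about the centre moves v=0.2 to roughly 0.35
# and v=0.8 to roughly 0.65, while u is untouched:
# scale_uvs_y([(0.1, 0.2), (0.9, 0.8)], 0.5)
```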
QuakeSkin Diffusion is a highly specialized image-generation AI model of type Safetensors / Checkpoint, created by AI community user _hackmans_. Derived from the powerful Stable Diffusion (SD 1.5) model, QuakeSkin Diffusion has undergone an extensive fine-tuning process, leveraging a dataset consisting of images generated by other AI models or user-contributed data. This fine-tuning ensures that QuakeSkin Diffusion can generate images highly relevant to the use-cases it was designed for, such as game art, game assets, and video games.
With a rating of 5, QuakeSkin Diffusion is a popular choice among users for generating high-quality images from text prompts.
Yes! You can download the latest version of QuakeSkin Diffusion from here.
To use QuakeSkin Diffusion, download the model checkpoint file and set up a UI for running Stable Diffusion models (for example, AUTOMATIC1111). Then, provide the model with a detailed text prompt to generate an image. Experiment with different prompts and settings to achieve the desired results. If this sounds a bit complicated, check out our initial guide to Stable Diffusion – it might be of help. And if you really want to dive deep into AI image generation and understand how to set up AUTOMATIC1111 to use Safetensors / Checkpoint AI models like QuakeSkin Diffusion, check out our crash course in AI image generation.