The training dataset consisted of artwork where each logical pixel occupies an 8x8 block of real pixels. This matters because if you need "perfect" pixels, I recommend a RESIZE at 0.125 followed by a RESIZE at 8.000, so that the pixels regain their true grid. Use the Nearest Neighbor method for both resizes.
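The downscale-then-upscale trick above can be sketched outside of Comfy as well. This is a minimal illustration using Pillow (my assumed tooling here; the original workflow uses Comfy's RESIZE nodes), with a hypothetical helper name:

```python
# Sketch of the "perfect pixels" trick: RESIZE at 1/block, then RESIZE
# at block, both with Nearest Neighbor, so every 8x8 block collapses
# to a single flat color. `snap_pixels` is a name I made up for this.
from PIL import Image

def snap_pixels(img: Image.Image, block: int = 8) -> Image.Image:
    w, h = img.size
    # Downscale: each block is represented by one sampled pixel.
    small = img.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    # Upscale: blow each sample back up into a flat 8x8 block.
    return small.resize((small.width * block, small.height * block), Image.NEAREST)
```

Nearest Neighbor is the important part: any smoothing filter (bilinear, bicubic) would average colors and reintroduce the gradients this trick is meant to remove.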
I expanded the dataset to 281 images (the first version had about 50). The variety of possible generations has increased significantly, but the pixels are still not perfect. And you know what? I intend to review absolutely every image in the dataset, because in my opinion the problem may lie in hidden imperfections: if even one image has a slight gradient or a broken pixel, the entire training run could be ruined. What worries me even more is that I can't find a tool like "Pixelate+". In paint.net, the "Pixelate+" effect is so good that the image is pixelated almost without changing. Sadly, I haven't found a similar tool in Comfy yet. If you use the RESIZE method like I do, I have bad news for you: this method is not ideal. At least, that's how it was in my tests. Your conditions may differ, and everything may work out great for you.
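I don't know what algorithm paint.net's Pixelate+ actually uses, but one plausible reason a tool can pixelate "almost without changing" the art is that it picks the dominant color of each block instead of averaging it, so flat regions survive untouched. A rough sketch of that idea (my own guess, not paint.net's method; `pixelate_dominant` is a hypothetical name):

```python
# Hypothetical alternative to plain RESIZE: replace each block with its
# most frequent color rather than an average, so stray anti-aliased
# pixels get voted out instead of smeared into gradients.
import numpy as np

def pixelate_dominant(arr: np.ndarray, block: int = 8) -> np.ndarray:
    """arr: (H, W, C) uint8 image whose sides are multiples of `block`."""
    h, w, c = arr.shape
    out = arr.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = arr[y:y + block, x:x + block].reshape(-1, c)
            colors, counts = np.unique(tile, axis=0, return_counts=True)
            out[y:y + block, x:x + block] = colors[counts.argmax()]
    return out
```

On a block that is 63 pixels black and 1 pixel red, averaging would produce a dark-red smudge, while the dominant-color vote keeps the block pure black.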
Frankly, it turned out to be not quite what I want; achieving that will be the goal of the next version.
You can wait for the release of version 2; it will be better than the current one.
If the words above don't scare you off, then know that this test LoRA is strange and over-trained: it tends to draw what it was trained on far more often than it follows text prompts. Keep this in mind.
While using it you will encounter many swords, strange books, and Easter Island statues, and most importantly: GigaChad (even a girl can acquire his facial features; this is uncontrollable, or at least I never managed to control it).
I trained this version using datasets v4 and v5, but most importantly, I wanted to learn something new for myself. Instead of the usual 12 epochs and 1 repeat, I did the opposite: 1 epoch and 12 repeats. The results personally make me happy: generalization doesn't seem to have suffered, and the final output doesn't look like a clone of the dataset, even though it contained only 723 images.
Just in case, I recommend using my parameters for generation: Euler_a, simple, checkpoint: PlantMilkSuite_walunt. If you want to add your own LoRA, think twice: was it trained with any kind of smoothing? Pixels might simply vanish under the pressure of blurring. It's probably best to avoid any LoRA that kills pixels.
Also, I should note that even though you can activate this LoRA with any checkpoint, most of the ones I've come across don't work well with my model. Keep that in mind.
Unfortunately for those who use artist tags in their prompts, those tags will likely blur your image too. The best solution is to avoid artist tags altogether. My recommendation for prompts: start with "pixpix, 8-bit, pixel_art" and end with "masterpiece". This way, the image stays sharp and the pixels won't die.
Now, onto a more sensitive topic for me. My model seems to be growing like yeast, judging by the CivitAI stats. I have plenty of ideas for implementation and further development, but there are a lot of nuances I'll get into now.
First, I'm going crazy from the lack of speed, so if any of you want to help with your computing power, message me; I respond to everyone.
Second, I've tested many base models for training. Trust me, Illustrious is not the best model for pixel art. I've tried SDXL, PONY, and Illustrious, and here's what I found:
Pony is a fascinating case: maybe its understanding of the world isn't perfect, but its artistic output is impressive, and its biggest strength is how well it absorbs training material. Pony is the closest any model has come to the "GameDev" space, which is crucial. Of course, Pony is far behind Illustrious when it comes to NSFW content. Illustrious knows anime and characters well but struggles with backgrounds compared to Pony.
As for SDXL? I don't even know what to take from it. It turned out to be too complex for me, so I honestly don't know what to do with it. I might just upload it for fun; you can play around with it yourselves.
By the way, I'd be incredibly grateful if any of you published your work using my LoRAs. It's important to me: this way, I can see what we're achieving together. It's one thing when I get good results, but it's another when you get them too. Though, in half the cases, the generations don't turn out so well. Maybe it's because I only publish the top 10% of what I generate? Who knows.
I didn't have a specific goal in mind for this training, like "training for game graphics." I just wanted to throw together a mix of images and generalize the essence of pixel art. But this topic has gone much further than I expected. My understanding of the situation could probably solve the entire problem of AI-generated pixel art, but as I've said before, I lack the hardware to pull me out of despair.
My current models exist only because I once got lucky and bought about 20,000 buzz from a friend. That's why I'm training all my models on CivitAI right now, but even the smallest training run costs 500 buzz, which is quite expensive. And there are so many things I want to test.
The saddest part is that I can't upload more than 1,000 images to the site for training. This could only be solved by switching to local training, so if anyone wants to help, message me. Maybe I can train through you.
Be sure to leave your comments under the model. I appreciate them, especially critical feedback on the current state of the model.