This is going to be a test bed for three different LoRA versions, since I've noticed the settings on the prior three LoRAs left little variance between them.
One will use different learn rates, another will use different DIM/ALPHA values, and a third will use both. Whichever I find most useful will be chosen for the V2s of the Viking, Greek, and Egyptian LoRAs.
UPDATE: Okay, it's down to two versions, because Prodigy, which was recommended for small datasets, adapts its own step size and overrides any manual learn rates, so I will just go ahead with the 16 DIM / 8 Alpha setting.
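For anyone curious why the learn-rate version became redundant, here's a minimal sketch assuming the standalone prodigyopt PyTorch package (trainers expose the same optimizer through an optimizer-type setting): Prodigy estimates its own step size as it trains, so lr is conventionally left at 1.0 and manual learn-rate tuning has little effect.

```python
# Minimal sketch, assuming the prodigyopt package (pip install prodigyopt).
import torch
from prodigyopt import Prodigy

net = torch.nn.Linear(8, 8)  # stand-in for the LoRA weights being trained

# Prodigy adapts its internal step-size estimate ("d") on the fly, so lr
# stays at the conventional 1.0; tuning it like a normal learn rate is
# effectively overridden, which is what collapsed the learn-rate test.
optimizer = Prodigy(net.parameters(), lr=1.0, weight_decay=0.01)

for step in range(10):  # toy training loop on random data
    optimizer.zero_grad()
    loss = net(torch.randn(4, 8)).pow(2).mean()
    loss.backward()
    optimizer.step()
```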