
/vt/ - Virtual Youtubers

>> No.62668092
File: 647 KB, 2048x1768, xyz_grid-0237--3621527931-amoralumina.jpg

A few threads ago I mentioned that NAI seems to use a different training method for characters; it looks like I was right, and kohya also refers to it in the Finetune trainer documentation.
They might have improved it since then, and they are now using a new base model, but trying their old method already shows some nice results.
Instead of training a lora directly, their method finetunes the whole model on a set of images. The lora is then extracted from the finetuned model by taking its difference against the base model used for training.
Strangely, even with my very bad settings, which overcooked the model after 8 of 20 epochs, the results look promising: the character is learned well and stays flexible. I need to test it some more, but this looks nice for a first try.
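For anons wondering what "extract the lora from the difference" means in practice, here's a minimal PyTorch sketch of the idea (not kohya's actual extract_lora_from_models script): subtract the base weight from the finetuned weight and compress that delta with a low-rank SVD, which gives you the usual lora_up/lora_down pair. Shapes, rank, and function names here are made up for illustration.

import torch

def extract_lora_delta(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 32):
    """Approximate (w_tuned - w_base) with a rank-`rank` product up @ down,
    which is exactly the term a LoRA module adds on top of the base weight."""
    delta = (w_tuned - w_base).float()                 # change introduced by the finetune
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]     # keep the top-`rank` components
    up = u * s.sqrt()                                  # (out_features, rank) -> lora_up
    down = s.sqrt().unsqueeze(1) * vh                  # (rank, in_features)  -> lora_down
    return up, down

# Toy usage: pretend these are the same projection weight from the base model
# and from the character-finetuned checkpoint.
w_base = torch.randn(320, 768)
w_tuned = w_base + 0.01 * (torch.randn(320, 64) @ torch.randn(64, 768))
up, down = extract_lora_delta(w_base, w_tuned, rank=64)
print((w_base + up @ down - w_tuned).abs().max())      # small reconstruction error

In the real pipeline you would loop this over every attention/linear weight in the UNet (and text encoder, if it was finetuned) and save the factors in the usual lora key layout; the rank you pick trades file size against how faithfully the extracted lora reproduces the finetune.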
