
Is there an explanation for training new adapters? #1

Answered by scarbain
Njasa2k asked this question in Q&A
According to their paper, they trained each adapter for 10 epochs with a batch size of 8 on at least 120K images.
I've managed to run the training with a batch size of only 1 on 12 GB of VRAM.
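If you can only fit a batch size of 1 in 12 GB of VRAM, one common workaround is gradient accumulation: step the optimizer once every 8 micro-batches so the update matches the paper's batch size of 8. This is a minimal sketch of the accumulation math only, not code from the repository; the toy model, names, and data are all illustrative assumptions.

```python
# Toy demonstration that accumulating scaled gradients over 8 micro-batches
# of size 1 reproduces the mean gradient of a single batch of size 8.
# (In a real training loop you would call loss.backward() per micro-batch
# and optimizer.step() once per accumulation cycle.)

def grad(w, x, y):
    # gradient of the per-sample loss 0.5 * (w*x - y)**2 w.r.t. w
    return (w * x - y) * x

def full_batch_grad(w, xs, ys):
    # mean gradient over the whole batch (what batch size 8 computes at once)
    return sum(grad(w, x, y) for x, y in zip(xs, ys)) / len(xs)

def accumulated_grad(w, xs, ys, accum_steps=8):
    # micro-batches of 1: scale each gradient by 1/accum_steps, then sum
    total = 0.0
    for x, y in zip(xs, ys):
        total += grad(w, x, y) / accum_steps
    return total

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
w = 0.5

# both paths yield the same update direction and magnitude
assert abs(full_batch_grad(w, xs, ys) - accumulated_grad(w, xs, ys)) < 1e-9
```

Note the per-step scaling by `1/accum_steps`: without it, the accumulated gradient would be 8x too large, which is equivalent to silently raising the learning rate.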

Replies: 1 comment 7 replies

Repliers: @scarbain, @xinntao, @Saquib764, @zhanghh-031102
Answer selected by Njasa2k