Thank you for your work! I have a few questions. #33

Open
rtfgithub opened this issue Mar 11, 2024 · 2 comments
Comments

@rtfgithub

In the code, the image encoder is frozen during the first training stage, while both the learnable text tokens and the text encoder are trainable. This does not match the paper, which states that only the text tokens are learnable and that both the image encoder and the text encoder are frozen.


DRACOyu commented Apr 2, 2024

Hello. In stage 1 the text encoder is fixed: only the parameters whose names contain "prompt_learner" are handed to the optimizer, so the encoders receive no updates.
import torch

def make_optimizer_1stage(cfg, model):
    # Collect only the prompt-learner parameters; the image and text
    # encoders are never added to the optimizer's parameter groups.
    params = []
    keys = []
    for key, value in model.named_parameters():
        if "prompt_learner" in key:
            lr = cfg.SOLVER.STAGE1.BASE_LR
            weight_decay = cfg.SOLVER.STAGE1.WEIGHT_DECAY
            params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}]
            keys += [key]
    if cfg.SOLVER.STAGE1.OPTIMIZER_NAME == 'SGD':
        optimizer = getattr(torch.optim, cfg.SOLVER.STAGE1.OPTIMIZER_NAME)(params, momentum=cfg.SOLVER.STAGE1.MOMENTUM)
    elif cfg.SOLVER.STAGE1.OPTIMIZER_NAME == 'AdamW':
        optimizer = torch.optim.AdamW(params, lr=cfg.SOLVER.STAGE1.BASE_LR, weight_decay=cfg.SOLVER.STAGE1.WEIGHT_DECAY)
    else:
        optimizer = getattr(torch.optim, cfg.SOLVER.STAGE1.OPTIMIZER_NAME)(params)
    return optimizer
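
For clarity: even if the encoder weights keep requires_grad=True in stage 1, they never change, because only the prompt-learner parameters appear in the optimizer above. The following is a minimal sketch (not taken from the repository; the helper names are illustrative) that makes the frozen/trainable split explicit and easy to verify:

import torch

def freeze_non_prompt_params(model: torch.nn.Module) -> None:
    # Explicitly freeze everything except the prompt learner. This is an
    # optional, illustrative step; the optimizer above already ensures the
    # encoders are not updated.
    for name, param in model.named_parameters():
        param.requires_grad = "prompt_learner" in name

def report_trainable_params(model: torch.nn.Module) -> None:
    # Print which parameters would actually receive gradient updates.
    for name, param in model.named_parameters():
        status = "trainable" if param.requires_grad else "frozen"
        print(f"{status:>9}: {name}")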

@cyberblue1

The updated image encoder and the updated text encoder do not need to match; only the updated image encoder is used at test time.
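
As a minimal sketch of what "only the updated image encoder is used to test" means in practice, assuming the trained model exposes an image_encoder attribute (an assumption, not the repository's exact interface) and that retrieval is computed purely from image features:

import torch

@torch.no_grad()
def extract_image_features(model, dataloader, device="cuda"):
    # Only the image branch is used at evaluation time; the text encoder
    # and the learned prompt tokens play no role once training is done.
    model.eval()
    feats = []
    for images, _ in dataloader:
        feats.append(model.image_encoder(images.to(device)))
    return torch.cat(feats, dim=0)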
