question about the use of ExponentialMovingAverage #15
Comments
Hi sevennotmouse,
Is ...
Thanks for your reply; let me clarify the code.
The evaluate function is as follows:
At the end of each epoch during training, I execute the `evaluate` function. The results are as follows:
To summarize, I have two questions:
Looking forward to your reply!
Here is my code:

```python
from torch_ema import ExponentialMovingAverage

model = ...
optimizer = ...
scheduler = ...
# pg: the trainable parameters passed to the EMA (defined elsewhere in my code)
ema_model = ExponentialMovingAverage(parameters=pg, decay=0.9999)
```
As shown in the code, I execute the `evaluate` function on the validation set after each epoch of training. I found that the validation results are exactly the same when I set different decay values. Why is that?
The evaluate function is as follows:
```python
import torch
from tqdm import tqdm

@torch.no_grad()
def evaluate(model, data_loader, device, epoch):
    softceloss_function = SoftCrossEntropy()  # soft-label cross-entropy criterion (defined elsewhere)
    model.eval()
    data_loader = tqdm(data_loader)
    for step, data in enumerate(data_loader):
        images, names, labels = data
        pred = model(images.to(device))
        softlabel = softlabel_function(labels)  # a function (defined elsewhere) to convert labels to soft labels
        loss = softceloss_function(pred, softlabel.to(device))
    val_loss, val_MAE = ...  # calculate loss and MAE
```
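A likely explanation for the identical results: `evaluate` runs the raw `model` weights, which the EMA never modifies. The `decay` value only affects the shadow copy kept inside the `ExponentialMovingAverage` object, so validation cannot change unless the averaged weights are actually swapped in. A tiny pure-Python simulation of the EMA update rule `shadow = decay * shadow + (1 - decay) * param` illustrates this (all names here are hypothetical, not torch_ema internals):

```python
def train_and_validate(decay, steps=5):
    w = 0.0      # "model" weight, moved by optimizer-like steps
    shadow = w   # EMA copy maintained alongside it
    for _ in range(steps):
        w += 1.0                                   # pretend optimizer step
        shadow = decay * shadow + (1 - decay) * w  # EMA update
    return w, shadow

raw_a, ema_a = train_and_validate(decay=0.9)
raw_b, ema_b = train_and_validate(decay=0.9999)
# The raw weights are identical regardless of decay...
assert raw_a == raw_b
# ...but the EMA copies differ, so only evaluating the EMA reflects `decay`.
assert ema_a != ema_b
```

With torch_ema, the analogue is wrapping validation in `with ema_model.average_parameters():` (or calling `ema_model.copy_to()`), so the shadow weights are used for the forward passes and the training weights are restored afterwards.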