How to get validation loss and display it in Tensorboard #1626
Additionally, is there a way to modify the code so that the x-axis of the TensorBoard graphs represents epochs instead of batches?
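For what it's worth, TensorBoard plots a scalar against whatever `global_step` value is passed to `add_scalar`, so logging once per epoch with the epoch index as the step gives an epoch-based x-axis. A minimal sketch; the log directory, tag name, and `fake_epoch_loss` helper are illustrative stand-ins, not part of OpenPCDet:

```python
from torch.utils.tensorboard import SummaryWriter

def fake_epoch_loss(epoch):
    # stand-in for the mean training loss of one epoch
    return 1.0 / (epoch + 1)

writer = SummaryWriter(log_dir='/tmp/tb_epoch_axis_demo')
for epoch in range(3):
    # passing the epoch index (not a batch counter) as global_step
    # makes TensorBoard plot one point per epoch
    writer.add_scalar('loss/train_epoch', fake_epoch_loss(epoch), global_step=epoch)
writer.close()
```

In OpenPCDet's trainer the batch counter is what gets passed as the step, which is why the existing curves are per-batch.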
This issue is stale because it has been open for 30 days with no activity.
I would also like to ask if there's a simple solution to this.
I'm new to object detection networks, so I'm not sure whether loss is used here the same way as elsewhere. I want to use the validation loss for hyperparameter tuning, and both the training- and validation-set losses to check for overfitting. Or is it more common to use the accuracy metrics PCDet provides for this?
Okay, I think I figured out how to get validation loss working without custom implementations or other silliness. This applies to the AnchorHead dense head, so your mileage may vary.

```python
import numpy as np
import torch

from pcdet.models import load_data_to_gpu

losses = []
for i, batch_dict in enumerate(dataloader):
    load_data_to_gpu(batch_dict)
    with torch.no_grad():
        pred_dicts, ret_dict = model(batch_dict)
    # stuff pertaining to pred_dicts and ret_dict omitted
    # get_loss() returns (loss, tb_dict); 'rpn_loss' is the scalar in tb_dict
    losses.append(model.dense_head.get_loss()[1]['rpn_loss'])

loss = np.average(losses)
```

If anyone with more knowledge can chime in on whether this is the right way or I'm doing something wrong, I'd be grateful. It does appear to work, though.
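One small detail worth guarding against: depending on the head, the values stored in `tb_dict` may be Python floats or 0-d tensors, and averaging a list of tensors with NumPy can be awkward. A hedged helper to normalise them before averaging (the `to_scalar` name and the sample values are illustrative):

```python
import numpy as np
import torch

def to_scalar(x):
    # tb_dict values may be plain floats or 0-d tensors depending on the head;
    # normalise everything to a Python float before averaging
    return x.item() if torch.is_tensor(x) else float(x)

# sample values mixing both representations
losses = [to_scalar(v) for v in [0.5, torch.tensor(1.5)]]
mean_loss = float(np.average(losses))  # 1.0
```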
Hi, did it work for you?
Were you able to compute the validation loss during training?
This issue was closed because it has been inactive for 14 days since being marked as stale. |
The current project logs only the training loss and learning-rate curves. How can I modify `train_one_epoch()` to also compute the validation loss during training?
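For the record, one way to approach this (a sketch only, not OpenPCDet's actual code): after each training epoch, run the validation dataloader under `torch.no_grad()`, average the per-batch loss as in the snippet earlier in the thread, and write the result to the same TensorBoard writer the trainer already uses. `TinyModel` below is a toy stand-in so the sketch is runnable; a real version would build `batch_dict`s and call the detector instead:

```python
import numpy as np
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # toy stand-in for an OpenPCDet model; real code would call model(batch_dict)
    # and read the loss from the dense head
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(1))

    def batch_loss(self, batch):
        return ((self.w * batch) ** 2).mean()

def validate_one_epoch(model, val_batches):
    model.eval()
    losses = []
    with torch.no_grad():
        for batch in val_batches:
            losses.append(model.batch_loss(batch).item())
    model.train()  # restore training mode before the next epoch
    return float(np.average(losses))

model = TinyModel()
val_batches = [torch.tensor([1.0, 2.0]), torch.tensor([3.0])]
val_loss = validate_one_epoch(model, val_batches)
```

The returned value could then be logged once per epoch, e.g. with `tb_log.add_scalar('loss/val', val_loss, epoch)`, alongside the existing training curves.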