How to get validation loss and display it in Tensorboard #1626

Closed
Zixiu99 opened this issue Jun 10, 2024 · 9 comments
@Zixiu99

Zixiu99 commented Jun 10, 2024

[Screenshot from 2024-06-10 15-36-24]
The current project logs only training loss and learning rate curves. How can I modify def train_one_epoch() to also compute the validation loss during training?

@Zixiu99
Author

Zixiu99 commented Jun 10, 2024

Additionally, is there a way to modify the code so that the x-axis of the TensorBoard graphs represents epochs instead of batches?

@ReneFiala

I would also like to ask if there's a simple solution to this.

  • Getting the train loss is simple: the model function in train_utils.py/train_one_epoch() returns all of that.
  • Creating predictions each epoch is also simple: just run evaluation after each epoch, as explained in Evaluation during training #840. This evaluation also returns other metrics like recall and AOS.
  • However, I'm not able to get loss values from evaluation - the model function doesn't return them, and there doesn't seem to be a built-in utility function that calculates them from the given config, ground truths, and predictions.
  • I'm currently using a custom implementation of GIoU-3D for this (I'm mainly interested in positional loss and less so in classification loss; see the rough sketch below), but that doesn't seem like the right way to go.

I'm new to object detection networks, so I'm not sure if loss is utilised in the same way as elsewhere. I want to use the validation loss for hyperparameter tuning, and both train and validation set losses to check for overfitting. Is it more common to use the accuracy metrics PCDet provides for this?
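
For context, the axis-aligned variant of such a GIoU-3D can look roughly like the sketch below. It ignores heading/rotation, so it is only a crude positional measure and not the full rotated-box GIoU; the box layout and function name are illustrative, not part of PCDet.

import torch

def giou_3d_axis_aligned(boxes_a, boxes_b):
    """GIoU for paired axis-aligned 3D boxes given as
    (x_min, y_min, z_min, x_max, y_max, z_max), shape (N, 6)."""
    # Intersection volume of each pair
    lo = torch.max(boxes_a[:, :3], boxes_b[:, :3])
    hi = torch.min(boxes_a[:, 3:], boxes_b[:, 3:])
    inter = (hi - lo).clamp(min=0).prod(dim=1)

    vol_a = (boxes_a[:, 3:] - boxes_a[:, :3]).prod(dim=1)
    vol_b = (boxes_b[:, 3:] - boxes_b[:, :3]).prod(dim=1)
    union = vol_a + vol_b - inter
    iou = inter / union.clamp(min=1e-7)

    # Smallest axis-aligned box enclosing both boxes
    enc_lo = torch.min(boxes_a[:, :3], boxes_b[:, :3])
    enc_hi = torch.max(boxes_a[:, 3:], boxes_b[:, 3:])
    enc_vol = (enc_hi - enc_lo).prod(dim=1)

    return iou - (enc_vol - union) / enc_vol.clamp(min=1e-7)

# A positional loss would typically be (1 - giou_3d_axis_aligned(pred, gt)).mean()
# over matched prediction/ground-truth pairs.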

@ReneFiala

ReneFiala commented Aug 10, 2024

Okay, I think I figured out how to get validation loss working without custom implementations and other silliness. This applies to the AnchorHead dense head, so your mileage may vary.

  1. Make sure targets are generated even during validation by getting rid of this condition or editing it to always be True (a sketch of that edit follows the snippet below). An alternative that doesn't require modifying PCDet's code (which I'm not a fan of) would be to manually call assign_targets() and edit forward_ret_dict from outside, but I haven't looked into obtaining the data_dict parameter. Maybe it's simple.
  2. Call the dense head's get_loss() function after each prediction, for instance as in this simplified bit from eval_utils.py (you still need to output the result somewhere: console, TensorBoard, or a file):
import numpy as np
import torch

from pcdet.models import load_data_to_gpu

losses = []
for i, batch_dict in enumerate(dataloader):
    load_data_to_gpu(batch_dict)
    with torch.no_grad():
        pred_dicts, ret_dict = model(batch_dict)
    # stuff pertaining to pred_dicts and ret_dict omitted
    # get_loss() returns (rpn_loss, tb_dict); tb_dict['rpn_loss'] is a plain float
    losses.append(model.dense_head.get_loss()[1]['rpn_loss'])
loss = np.average(losses)
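
For step 1, the guard in question sits in the dense head's forward() (for AnchorHeadSingle that is roughly pcdet/models/dense_heads/anchor_head_single.py; check your version, as the exact code may differ). Relaxing it so targets are also assigned during evaluation whenever ground truths are present looks roughly like this:

# Inside the dense head's forward(); paraphrased from memory.
# Original guard:  if self.training:
# Relaxed so targets are also assigned during evaluation when gt_boxes are available:
if self.training or 'gt_boxes' in data_dict:
    targets_dict = self.assign_targets(
        gt_boxes=data_dict['gt_boxes']
    )
    self.forward_ret_dict.update(targets_dict)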

If anyone with more knowledge can chime in on whether this is the right way or whether I'm doing something wrong, I'd be grateful. However, it appears to work.
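
To also get that averaged loss into TensorBoard per epoch (which incidentally addresses the earlier question about the x-axis), one option is to log it with the epoch index as the step. A minimal sketch, assuming the SummaryWriter tb_log created in train.py is passed into the evaluation and cur_epoch is the epoch the evaluation ran after (both names are assumptions here):

# tb_log is the torch.utils.tensorboard SummaryWriter already created in train.py;
# passing cur_epoch as global_step makes the x-axis epochs instead of iterations.
tb_log.add_scalar('val/rpn_loss', loss, cur_epoch)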

@jyothsna-phd22

Hi, did it work for you?

@jyothsna-phd22

Were you able to compute the validation loss during training?


This issue was closed because it has been inactive for 14 days since being marked as stale.
