Training metrics #100
Just calculate accuracy in training_step. You can do whatever in there; it's not just for the loss.
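The per-batch calculation suggested above can be sketched as follows. This is a framework-agnostic illustration using plain Python lists rather than tensors; `batch_accuracy` is a hypothetical helper, not part of Lightning's API, and would be called from inside `training_step`.

```python
# Hedged sketch: computing per-batch accuracy alongside the loss.
# batch_accuracy is an illustrative helper (not a Lightning API);
# logits/targets are plain lists standing in for tensors.

def batch_accuracy(logits, targets):
    """Fraction of samples whose argmax prediction matches the target."""
    preds = [row.index(max(row)) for row in logits]
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

# Example: 3 samples, 2 classes.
logits = [[0.2, 0.8], [0.9, 0.1], [0.4, 0.6]]
targets = [1, 0, 0]
print(batch_accuracy(logits, targets))  # 2 of 3 predictions correct
```

Note that this only yields the accuracy of a single batch, which is exactly the limitation raised in the next comment.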
I think the problem here is that if metrics are calculated in training_step, they are only calculated for one batch. I need to tweak the code as @rcmalli did to aggregate over the whole epoch. Can we have a function called training_end where we can calculate metrics for the whole epoch? (Something similar to validation_end, but for training.)
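The aggregation requested here can be sketched as a reduce over per-batch outputs. The hook names below mirror the `validation_end` convention the comment refers to, but they are illustrative stand-ins, not the actual Lightning API; the key point is returning counts (not averages) per batch so the epoch-level reduction is exact.

```python
# Hedged sketch of epoch-level metric aggregation. training_step_output
# and training_end are hypothetical names mirroring Lightning's
# validation_end convention; the reduce logic is the actual point.

def training_step_output(preds, targets):
    """Per-batch output: raw counts, so uneven batch sizes reduce exactly."""
    correct = sum(p == t for p, t in zip(preds, targets))
    return {"correct": correct, "total": len(targets)}

def training_end(outputs):
    """Reduce the list of per-batch outputs into one epoch-level accuracy."""
    correct = sum(o["correct"] for o in outputs)
    total = sum(o["total"] for o in outputs)
    return {"train_acc_epoch": correct / total}

# Two batches of different sizes: averaging per-batch accuracies would
# weight both batches equally; summing counts weights every sample equally.
outputs = [training_step_output([1, 0], [1, 1]),
           training_step_output([0, 0, 1, 1], [0, 0, 1, 0])]
print(training_end(outputs))  # 4 correct out of 6 samples
```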
@minhptx Did you implement this? I also want to collect my training metrics after each epoch, but as far as I understood the new method …
I'm also interested in such a feature. It took me a little while to understand that …
@Jonathan-LeRoux I'm in the same boat. It is super misleading that …

Continuing this discussion, @williamFalcon, I think this thread's name is misleading. There's absolutely no reason for Lightning to automatically calculate accuracy. On the other hand, it would be super useful if Lightning could keep the list of …

Correct me if I'm wrong, but the only way to calculate these metrics is for me to save a state of (y_hat, target) throughout the entire epoch and calculate metrics at certain points. My point is: if I am not supposed to keep state to track validation metrics, why would we break that philosophy with the training metrics?

edit: …
@captainvera Have you checked the recent changes in #776, #889, and #950?
@captainvera May I ask how you compute metrics like …
Should we have training accuracy calculation automated?
Currently I am handling it like this:
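The author's actual code is not shown here, but a common manual pattern for this is a small running-metric accumulator that is updated every batch, read out once per epoch, and reset. The class below is an illustrative sketch, not the commenter's code or a Lightning API.

```python
# Hedged sketch of a manual running-accuracy accumulator; an
# illustrative stand-in for the elided code, not a real Lightning API.

class RunningAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        """Accumulate correct/total counts from one batch."""
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        """Epoch-level accuracy from the accumulated counts."""
        return self.correct / self.total if self.total else 0.0

    def reset(self):
        """Call at the start of each epoch."""
        self.correct = 0
        self.total = 0

metric = RunningAccuracy()
for preds, targets in [([1, 0], [1, 0]), ([1], [0])]:  # fake batches
    metric.update(preds, targets)
print(metric.compute())  # 2 correct out of 3 samples
```

The update/compute/reset split keeps the per-batch work cheap while deferring the actual metric calculation to epoch boundaries.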