Thanks for sharing this project! I followed the instructions to implement TMRNet in two steps (first training the single resnet_lstm for the LFB, then training the whole TMRNet), as well as the pre-processing. During training there is always a large gap between training and test performance. For example, by the 2nd epoch of TMRNet the training loss is already almost zero, and after that the test performance drops further due to severe over-fitting on the training set.
Also, I tried the provided pretrained pth with the locally produced LFB, but only got an accuracy of 75.13%. More metrics are as follows:
Test Loss: 1.0964 | Acc: 75.73 F1: 61.60 Recall: 63.04 Prec: 66.74 Jaccard: 60.94.
Are there any tricks to achieve the reported test performance (e.g., 89.2% accuracy for the ResNet-based TMRNet)? Looking forward to your reply! Thanks.

For the first question, regarding the large gap between training and test performance: in our observation, overfitting the training set can actually improve the test performance. When the training loss no longer decreases, we reduce the learning rate and continue training. During the second stage (training the full TMRNet), the model converges quickly (usually within 1-2 epochs, sometimes within a single epoch), so make sure the learning rate is small enough; otherwise it will result in poor performance.
For the second question, about the provided pretrained pth not reproducing the results, there are a few things to pay attention to: 1. Cutting off the black margins of the video frames improves performance. 2. The LFB should also be generated from the best pretrained model (I have uploaded that model for reference: https://www.dropbox.com/s/sa8b2mv7x0eww0z/pretrained_LFB_model.pth?dl=0). 3. Evaluate the results using the official MATLAB script, as mentioned in the README.
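The learning-rate trick in the reply (drop the LR once the training loss stops decreasing, then continue training) can be sketched framework-agnostically. This is a minimal illustration, not code from the TMRNet repository; the decay factor, patience, and loss values are illustrative assumptions.

```python
class PlateauLR:
    """Reduce the learning rate when the tracked loss stops improving.

    A minimal sketch of the "lower the LR once the training loss
    plateaus" strategy; factor and patience are illustrative choices,
    not the values used by TMRNet.
    """

    def __init__(self, lr=1e-3, factor=0.1, patience=2, min_delta=1e-4):
        self.lr = lr
        self.factor = factor        # multiply the LR by this on a plateau
        self.patience = patience    # epochs without improvement to tolerate
        self.min_delta = min_delta  # smaller improvements count as noise
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        """Call once per epoch with the training loss; returns the current LR."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr


# Illustrative loss curve: the loss improves, then plateaus.
sched = PlateauLR(lr=1e-3, factor=0.1, patience=2)
lrs = [sched.step(loss) for loss in [1.0, 0.5, 0.5, 0.5, 0.5]]
print(lrs)  # the LR is cut by 10x once the plateau exceeds the patience
```

In PyTorch the same behaviour is available out of the box via `torch.optim.lr_scheduler.ReduceLROnPlateau`, stepped with the epoch's training loss.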
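The black-margin cropping mentioned in point 1 can be sketched as a bounding-box crop that discards rows and columns whose pixels are all near-black. The frame representation (a grayscale image as a list of rows) and the intensity threshold are assumptions for illustration, not the repository's actual preprocessing code.

```python
def crop_black_margins(frame, threshold=10):
    """Crop away border rows/columns whose every pixel is near-black.

    `frame` is a grayscale image given as a list of rows of intensities
    (0-255); the threshold of 10 is an illustrative choice, not the
    value used in the TMRNet preprocessing.
    """
    # Rows and columns that contain at least one non-black pixel.
    rows = [r for r, row in enumerate(frame) if max(row) > threshold]
    cols = [c for c in range(len(frame[0]))
            if max(row[c] for row in frame) > threshold]
    if not rows or not cols:  # fully black frame: nothing to keep
        return frame
    return [row[cols[0]:cols[-1] + 1] for row in frame[rows[0]:rows[-1] + 1]]


# A 4x6 "frame" whose outer border is black margin.
frame = [
    [0,   0,   0,   0, 0, 0],
    [0, 120, 130, 140, 0, 0],
    [0, 110, 125, 135, 0, 0],
    [0,   0,   0,   0, 0, 0],
]
print(crop_black_margins(frame))  # [[120, 130, 140], [110, 125, 135]]
```

In practice the crop box would be computed once (e.g., from the first frame of each video) and applied to every frame, so all frames of a video keep the same spatial size.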