The result record in train.py has two items. #2
Comments
Hi, it's the best validation result. Fix the random seed, and run multiple times to report the average.
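The advice above (fix the seed, average over runs) can be sketched as follows. This is a minimal stand-alone illustration, not code from the repo: `run_training` is a hypothetical stand-in for one `train.py` run, and only the stdlib RNG is seeded here to keep the sketch self-contained.

```python
import random
import statistics


def set_seed(seed):
    # In the actual repo you would also seed the other RNGs, e.g.
    # np.random.seed(seed), torch.manual_seed(seed) and
    # torch.cuda.manual_seed_all(seed); only the stdlib RNG is
    # seeded here so the sketch runs without extra dependencies.
    random.seed(seed)


def run_training():
    # Hypothetical stand-in for one train.py run: returns a noisy
    # "best validation accuracy" to mimic run-to-run variance.
    return 80.0 + random.gauss(0.0, 0.5)


def average_over_runs(n_runs=5, base_seed=0):
    # Repeat training with distinct but fixed seeds and report
    # the mean and standard deviation across runs.
    results = []
    for i in range(n_runs):
        set_seed(base_seed + i)
        results.append(run_training())
    return statistics.mean(results), statistics.stdev(results)


mean_acc, std_acc = average_over_runs()
```

Because every run is seeded, repeating `average_over_runs()` reproduces the same mean and standard deviation.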
Hi, thank you for answering my question! I found that some files are missing in ./models (alexnet.py). I ran train.py on my server, and my reproduced results do not reach those reported in the paper (PACS dataset with ResNet-18). Did I miss something important? I commented out some code in the class 'JigsawNewDataset' that handles the jigsaw data generation process, in order to run the RSC model. My reproduced results are 79.45% for Sketch and 80.22% for Art Painting.
I have uploaded my environment. Could you try to run the code in that environment? I am attaching some results I just got for Sketch and Art Painting. [Attached images: sketch, art_painting]
Thank you for sharing your results. I set up the environment to match your 'environment.yml' and ran the PACS dataset with ResNet-18 five times.
[Attached results: best val]
[Attached results: best test]
I find the results are not stable under random initialization.
Hi, I run one experiment at a time on the server, and the best val result usually appears in the last 5 epochs. It should be more stable and better.
Got it! Thank you.
In train.py (L145), I find that the code also records the best performance on the test domain in logger.save_best().
Could you point out which result the paper uses: the best test-domain performance (test_res.max()) or the test performance at the best validation epoch (test_res[idx_best])?
Thanks a lot!
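The difference between the two records can be illustrated with a small sketch. The per-epoch accuracy arrays below are made-up values for illustration; only the expressions `test_res.max()` and `test_res[idx_best]` come from the thread, the rest is an assumption about how `train.py` accumulates results.

```python
import numpy as np

# Hypothetical per-epoch validation/test accuracies (one entry per epoch).
val_res = np.array([70.1, 72.4, 73.0, 72.8, 72.9])
test_res = np.array([78.0, 79.5, 79.2, 80.1, 79.8])

# Epoch with the best validation accuracy.
idx_best = int(val_res.argmax())

# Test accuracy at the best-validation epoch: the standard
# model-selection protocol (what the maintainer says the paper reports).
reported = test_res[idx_best]

# Best test accuracy over all epochs: an oracle number that peeks
# at the test domain and should not be reported as the main result.
oracle = test_res.max()

print(f"best val epoch: {idx_best}, reported: {reported}, oracle: {oracle}")
```

With these made-up numbers the two records differ (79.2 vs 80.1), which is exactly why logger.save_best() storing both can be confusing.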