
Annotation file the paper used #5

Closed
Darren-pfchen opened this issue Sep 23, 2022 · 4 comments

@Darren-pfchen

Dear authors,
I am very interested in your paper and have tried to run your code. I used the annotations generated by your code with 0.2 and 0.4 noise rates, but the performance trained on these generated annotations is lower than that reported in the paper (0.4: 16.4 vs. 18.6 AP; 0.2: 28.1 vs. 32.1 AP). I suspect this may be because the annotation file is different (mine is newly generated) or because of other settings. Could you release the annotation files the paper used, for comparison? Thanks!

@cxliu0
Owner

cxliu0 commented Sep 23, 2022

Thanks for your interest in this work. The annotation files can be downloaded from Google Drive.

BTW, for 20% noise you may need to use a smaller oamil_lambda, e.g., 0.01 instead of 0.1 (see the implementation section in the paper).
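For reference, such an override might look roughly like the sketch below, assuming the repo follows mmdetection-style Python configs; the exact key path for oamil_lambda inside the model dict is an assumption, so please check it against the released config.

```python
# Sketch only: override oamil_lambda for the 20% noise setting.
# Assumption: oamil_lambda is a field of the bbox head; the exact
# key path may differ in the released faster_rcnn_r50_fpn_coco_oamil.py.
_base_ = ['./faster_rcnn_r50_fpn_coco_oamil.py']

model = dict(
    roi_head=dict(
        bbox_head=dict(
            oamil_lambda=0.01,  # default 0.1; use a smaller value for 20% noise
        )
    )
)
```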

@Darren-pfchen
Author

Thanks for your reply! With your new setting, I can reproduce the reported performance for the 0.2 noise rate. However, the performance for the 0.4 noise rate is still around 16.4 AP (I have tried many times). I would greatly appreciate your help.
In addition, could you please provide the config and annotation file (in the format the paper used) for the GWHD dataset? I find the paper very interesting and would like to run some additional experiments. Thanks again!

@cxliu0
Owner

cxliu0 commented Sep 28, 2022

I have run the code and the results seem fine, so I am not sure what leads to the inferior results (16.4 AP). Could you share your training log file? It may provide hints about the problem.

For the GWHD dataset:

  • The model config is the same as faster_rcnn_r50_fpn_coco_oamil.py, and the dataset config is basically the same as voc07_oamil.py, except that img_scale is set to (1024, 1024), because the image resolution of the GWHD dataset is 1024x1024 (see the config sketch after this list).
  • We use the annotation file provided by the dataset (.csv format), which is available at Zenodo. Note that the dataset only contains clean (corrected) annotations.
  • For the noisy training annotations, I am not sure whether it is appropriate to share them, because the authors of the GWHD dataset no longer provide access to them. Optionally, you may download the initial training annotations from Kaggle. Note, however, that the naming of the images is completely different between Kaggle (initial version) and Zenodo (corrected version). We used a simple image matching algorithm to obtain the corresponding "noisy" training data (a rough sketch is given below).
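
Regarding the first point, here is a minimal sketch of the dataset config change, assuming the usual mmdetection pipeline layout; only img_scale comes from the description above, the other fields are standard mmdetection transforms rather than values copied from the repo:

```python
# Sketch of a GWHD dataset config derived from voc07_oamil.py.
# Assumption: mmdetection-style configs; only img_scale differs here.
_base_ = ['./voc07_oamil.py']

img_scale = (1024, 1024)  # GWHD images are 1024x1024

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=img_scale, keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize',
         mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]

data = dict(train=dict(pipeline=train_pipeline))
```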
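As for the image matching, the following is only a rough sketch of one possible approach (not the exact script we used), assuming the image content is byte-identical between the Kaggle and Zenodo releases; if the images were re-encoded, a perceptual hash would be needed instead. The directory names are placeholders.

```python
# Sketch: pair Kaggle (initial) and Zenodo (corrected) GWHD images by content hash,
# so the noisy Kaggle boxes can be attached to the renamed Zenodo images.
import hashlib
from pathlib import Path

def hash_images(image_dir):
    """Map md5(file bytes) -> file name for every image in a directory."""
    table = {}
    for path in Path(image_dir).iterdir():
        if path.suffix.lower() in {'.png', '.jpg', '.jpeg'}:
            table[hashlib.md5(path.read_bytes()).hexdigest()] = path.name
    return table

kaggle = hash_images('gwhd_kaggle/train')    # placeholder paths
zenodo = hash_images('gwhd_zenodo/images')

# Zenodo name -> Kaggle name, for images present in both releases
name_map = {zenodo[h]: kaggle[h] for h in zenodo.keys() & kaggle.keys()}
```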

@cxliu0
Owner

cxliu0 commented Oct 3, 2022

We have updated the model configuration for COCO 40% noise. It can now achieve performance similar to that reported in the paper (around 18.6 AP).

cxliu0 closed this as completed Aug 8, 2023