The result of the inference stage is wrong #2
Comments
I encountered the same problem.
Hi, label 0 does not represent the background here, because the dataloader sets "reduce_zero_label=True". I'm not sure whether this is due to data processing or the environment; I will try to find out why.
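For reference, here is a minimal sketch of what `reduce_zero_label=True` typically does in mmsegmentation-style dataloaders (an assumption about this repo's data pipeline, not code taken from it): the raw label 0 is mapped to the ignore index, and every remaining class id is shifted down by one, which is why 0 no longer means background after loading.

```python
import numpy as np

def reduce_zero_label(seg_map: np.ndarray, ignore_index: int = 255) -> np.ndarray:
    """Sketch of the usual reduce_zero_label=True behaviour."""
    seg_map = seg_map.copy()
    seg_map[seg_map == 0] = ignore_index            # raw label 0 is excluded from training/eval
    seg_map = seg_map - 1                            # class k becomes k - 1
    seg_map[seg_map == ignore_index - 1] = ignore_index  # restore pixels that were already ignored
    return seg_map

# Example: raw ids {0, 1, 2, 255} become {255, 0, 1, 255}
print(reduce_zero_label(np.array([[0, 1], [2, 255]])))
```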
When I tried to train the model following the README, the loss stayed almost constant. Did you encounter this problem before? Thank you!
I get test results much better than the ones above, but they are still lower than the numbers reported in the paper. I also find that, on my machine, testing after 1.8K training iterations gives better performance than after 2K. Maybe the difference comes from the environment? Here are the inference results after training for 1.8K iterations:
And here are the inference results after training for 2K iterations:
I am getting poor results on COCO just by running inference with the models shared in the repository. I want to point out that these are evaluated on the Panoptic COCO dataset and not COCO-Stuff; is it possible these results are actually fine for a slightly different distribution, or is something else wrong?
The command is:
Excuse me, after training for 20,000 iterations on the augmented VOC dataset, the metrics on both the seen and unseen classes at the inference stage are close to 0. What could be the reason for this?