Hi, I have a question about your method.
Figure 2(b) says that the pseudo-labels generated by SAM are used for training to obtain a pretrained model, but the category of each SAM pseudo-label is assigned according to its mask index.
In two different images, objects of the same semantic class (e.g., people) may receive completely different mask indices. How do you use this dataset for supervised training?
Have you made any modifications to the loss?
Hi, the dataset is used for pretraining: you perform supervised training on it with your own model to obtain the pretrained weights.
The issue you mention does exist, and there were indeed many such labeling errors in the initial version of this article. That is why we acknowledged in the paper that this aspect can be improved in future work. I have since identified a more appropriate approach for constructing the labels, which will be presented in subsequent articles. Of course, you can also design your own method for optimizing the labels. It is worth noting, however, that even the approach we used at the time already yielded better results on SODA than ImageNet-pretrained models, which was a pleasantly surprising outcome.
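To make the questioner's concern concrete, here is a minimal sketch (function names and shapes are hypothetical, not from the paper) of the mask-index labeling scheme as described: each pixel's "class" is simply the index of the SAM mask that covers it, so the same semantic object can end up with different labels in different images.

```python
import numpy as np

def masks_to_label_map(masks, shape):
    """Build a per-image label map: a pixel's class is the index of the
    SAM mask covering it (0 = background). Indices are assigned per
    image, so they carry no cross-image semantic meaning."""
    label = np.zeros(shape, dtype=np.int64)
    for idx, mask in enumerate(masks, start=1):
        label[mask] = idx
    return label

# Image A: the "person" mask happens to be listed first -> label 1
person_a = np.zeros((4, 4), dtype=bool)
person_a[0, :] = True
label_a = masks_to_label_map([person_a], (4, 4))

# Image B: the same semantic "person" is listed second -> label 2
sky_b = np.zeros((4, 4), dtype=bool)
sky_b[3, :] = True
person_b = np.zeros((4, 4), dtype=bool)
person_b[0, :] = True
label_b = masks_to_label_map([sky_b, person_b], (4, 4))

print(label_a[0, 0], label_b[0, 0])  # same object class, different labels
```

This is exactly why a standard cross-entropy loss over these labels learns boundary/grouping structure rather than consistent categories, which is acceptable for pretraining but motivates the improved label construction mentioned above.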