This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
Hi @tmp12316 , I tested with the correct config file without the multi-scale trick and got 45.6 AP@50 on Clipart1k using batch size 4. I will update the experiment with a larger batch size once I have enough local training resources.
Thanks for your work. I recently noticed another question, this time about the input image scale.
As far as I know, the minimum input scale should be 600 for FRCNN-based DAOD frameworks, as shown in https://github.com/krumo/Domain-Adaptive-Faster-RCNN-PyTorch/blob/df0488405a7679552bc2504b973e29178c141b26/configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_C4_cityscapes_to_foggy_cityscapes.yaml#L24
But it seems that AT uses multi-scale training in all configs?
adaptive_teacher/configs/Base-RCNN-C4.yaml, line 17 (commit cba3c59)
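For reference, the two conventions differ only in the `INPUT.MIN_SIZE_TRAIN` setting. A minimal sketch in Detectron2-style YAML (the exact scale tuples here are illustrative; check the linked config files for the values actually used):

```yaml
# Single-scale training, common in FRCNN-based DAOD baselines:
# the shortest image side is fixed at 600.
INPUT:
  MIN_SIZE_TRAIN: (600,)
---
# Multi-scale training: one scale is sampled per image from the tuple,
# as in Detectron2's default Base-RCNN configs.
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
```

Because multi-scale training typically improves AP on its own, comparisons against single-scale (600) baselines may conflate that gain with the domain-adaptation method itself.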