I observe that MIC relies heavily on randomly generated masks during training. The training process involves not only the random generation of masks but also the random, class-based mixing of images. This raises the question of whether the high level of randomness in MIC's training leads to unstable segmentation results. Specifically, when I use MIC to train models with Cityscapes as the source dataset and Dark Zurich as the target dataset, the results vary between 56 and 59 mIoU, falling short of the 60.3 mIoU reported in the original paper. Is this level of instability considered normal?
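For context, the kind of random patch masking the question refers to can be sketched roughly as below. This is a hypothetical illustration, not the repo's actual implementation; the function name, `patch_size`, and `mask_ratio` values are assumptions. The point is that the only source of randomness is the per-patch Bernoulli draw, so passing a seeded generator makes the masks reproducible across runs:

```python
import torch

def random_patch_mask(img, patch_size=32, mask_ratio=0.7, generator=None):
    """Zero out a random subset of square patches (MIC-style masking sketch).

    Hypothetical helper for illustration; parameter defaults are
    assumptions, not the values used in the MIC repository.
    """
    _, H, W = img.shape
    mh, mw = H // patch_size, W // patch_size
    # One Bernoulli draw per patch: 1 = keep, 0 = mask out.
    keep = (torch.rand(mh, mw, generator=generator) > mask_ratio).float()
    # Expand the patch grid back to pixel resolution.
    mask = keep.repeat_interleave(patch_size, 0).repeat_interleave(patch_size, 1)
    return img * mask

# Seeding the generator fixes the mask pattern for a given run.
g = torch.Generator().manual_seed(0)
masked = random_patch_mask(torch.ones(3, 128, 128), generator=g)
```

Even with the mask generator seeded, run-to-run variance can still come from other stochastic parts of the pipeline (mixing, data augmentation, cuDNN nondeterminism), so some spread in the final mIoU is expected.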