I have tried to train RegNet variants with strong augmentations such as AutoAugment and CutMix, but the performance does not improve with them.
For example, I have reproduced the paper's result for RegNetY-400MF, but with CutMix I only get around 69% top-1 accuracy, which is well below the vanilla RegNetY-400MF.
I also tried training RegNetY for more epochs, but that did not improve the result either.
Do you have any experience with strong data augmentations?
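For reference, this is roughly how I apply CutMix in my training loop. It is a simplified sketch rather than my exact training code; the helper names (`rand_bbox`, `cutmix_step`) and the `alpha=1.0` setting are just illustrative:

```python
import numpy as np
import torch
import torch.nn.functional as F

def rand_bbox(size, lam):
    # Sample a random box whose area is roughly (1 - lam) of the image.
    H, W = size[2], size[3]
    cut_rat = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(H * cut_rat), int(W * cut_rat)
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)
    return y1, y2, x1, x2

def cutmix_step(model, images, targets, alpha=1.0):
    # Paste a patch from a shuffled copy of the batch and mix the losses.
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0), device=images.device)
    y1, y2, x1, x2 = rand_bbox(images.size(), lam)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # Recompute lambda from the exact pasted area.
    lam = 1.0 - ((y2 - y1) * (x2 - x1) / (images.size(2) * images.size(3)))
    logits = model(images)
    loss = lam * F.cross_entropy(logits, targets) \
        + (1.0 - lam) * F.cross_entropy(logits, targets[perm])
    return loss
```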
Hi @ildoonet, thanks for sharing your findings. I don't have experience with strong data augmentations myself. However, we heard that several groups have managed to achieve strong results with RegNets using improved training settings (e.g. RegNetY-3.2GF with 82.0 top-1 accuracy reported here). So I believe that some combination of stronger settings should be able to give you better results. We also plan to release models with stronger settings in the future.