Your outstanding performance surprised us, and I have tried to apply the method in my own work. In a few-shot classification experiment on a single dataset, MiniImageNet (without cross-domain data from multiple datasets), fine-tuning made very little difference, failing to bring even a 0.1% improvement. Was the fine-tuning effect significant in your experiments? What do you think might be the cause of this problem?
I suspect fine-tuning works better when the domain gap is not too small. On MiniImageNet, the pre-trained DINO features already work amazingly well; it is now well understood that foundation models solve many standard classification problems. While scaling up foundation models solves more and more problems, certain domains (e.g. 3D vision) still lack a good foundation model, and there meta-learning and fine-tuning remain good practice.
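To illustrate why fine-tuning can have little headroom here: with a strong frozen backbone, few-shot classification often reduces to nearest-prototype matching on the fixed embeddings. The sketch below assumes that setup; the feature vectors are toy stand-ins for backbone features (no DINO model is loaded), and the function name is hypothetical.

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Nearest-prototype few-shot classifier on frozen embeddings.

    support_feats: (n_support, d) array of backbone features
    support_labels: (n_support,) integer class labels
    query_feats: (n_query, d) array of backbone features
    Returns the predicted class label for each query.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its support embeddings.
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from each query to each prototype.
    dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :],
                           axis=2)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 3-d "features".
support = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0], [0.1, 0.9, 0.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.95, 0.0, 0.0], [0.0, 0.95, 0.1]])
print(prototype_classify(support, labels, queries))  # -> [0 1]
```

If the frozen features already separate the classes this cleanly, fine-tuning the backbone has little room to improve accuracy, which matches the small gains reported above on MiniImageNet.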