Your work is impressive and I have learned a lot from it. However, I have a question about how your method achieves such high accuracy on the base task across the three datasets. In particular, on ImageNet-Subset, the accuracy on the initial 50 classes appears to be close to 100% (estimated at about 98% from Figure 7).
I look forward to your reply. Thank you.
The accuracy on the base classes is also higher on CIFAR-100 (5/10/20 phases, Figure 5). This is because:
Training on the base classes is an easier task than training on the full set of classes: the model only needs to distinguish roughly half of the total classes.
Other methods simply use the out-of-the-box classification output of the network, whereas ours applies several post-processing steps, such as Voronoi subdivision and test-time assignment.
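The exact post-processing in the paper is specific to the method, but the core idea behind a Voronoi-style assignment can be sketched as nearest-centroid classification: each test feature is assigned to the closest class centroid, which implicitly partitions the feature space into one Voronoi cell per class. The function name, array shapes, and toy data below are illustrative, not the paper's actual API:

```python
import numpy as np

def voronoi_assign(features, centroids):
    """Assign each feature vector to its nearest class centroid.

    The nearest-centroid rule partitions feature space into Voronoi
    cells, one per class; a sample is labeled by the cell it falls in.
    """
    # Pairwise Euclidean distances: (n_samples, n_classes)
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy example: 3 class centroids in a 2-D feature space
centroids = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
features = np.array([[0.5, 0.2], [4.8, 0.1], [0.3, 4.9]])
print(voronoi_assign(features, centroids))  # -> [0 1 2]
```

With fewer base classes, these cells are larger and better separated, which is consistent with the higher base-phase accuracy discussed above.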