PyTorch implementation of the paper "SuperLoss: A Generic Loss for Robust Curriculum Learning" in NIPS 2020.

Super Loss

This is an unofficial PyTorch implementation of the paper "SuperLoss: A Generic Loss for Robust Curriculum Learning" (NeurIPS 2020):
https://proceedings.neurips.cc/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-Paper.pdf

Overview

THE CURRENT CODE IS UNDER MAINTENANCE AND IS NOT CORRECT. The Lambert W function (`lambertw`) should be implemented in PyTorch instead of via the scipy library, as noted in AlanChou/Truncated-Loss#3 (comment). I'll try to fix this when I'm available.
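A pure-PyTorch replacement could look like the following sketch. This is my own Newton-iteration port, not the repo's code; the function name, iteration count, and initial guess are all my assumptions. It covers the principal branch for x >= -1/e, which is the range SuperLoss needs.

```python
import torch

def lambertw(x: torch.Tensor, iters: int = 20) -> torch.Tensor:
    """Principal branch W0 of the Lambert W function, elementwise.

    Sketch only: Newton's iteration on f(w) = w * exp(w) - x,
    valid for x >= -1/e (the domain of the principal branch).
    """
    # Initial guess: log1p(x) for x >= 0, x itself for x in [-1/e, 0),
    # since W(x) ~ x near 0.
    w = torch.where(x >= 0, torch.log1p(x.clamp(min=0)), x)
    for _ in range(iters):
        ew = torch.exp(w)
        f = w * ew - x                        # we want f == 0
        w = w - f / (ew * (w + 1) + 1e-12)    # Newton step, guarded near w = -1
    return w
```

Because this is plain tensor arithmetic, it runs on the GPU and avoids the tensor-to-numpy round trip that the scipy version requires.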

This is a simple implementation of the paper: only the image classification task on CIFAR is implemented. Note that, to save time, the codebase is mostly based on my previous implementation of Truncated Loss. As a result, some settings might differ from the paper, and the file SuperLoss.py is "hard coded", which is not ideal if one wants to plug other loss functions (e.g. MSE or Focal Loss) into SuperLoss.
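A generic wrapper is conceptually simple, though. The following is a hedged sketch (not the repo's SuperLoss.py) of how an arbitrary per-sample loss could be plugged in, using scipy's lambertw as the current code does. The class and parameter names are mine, and tau is fixed to log(C) as in the paper's classification setup rather than estimated online.

```python
import math
import numpy as np
import torch
from scipy.special import lambertw  # the repo currently relies on scipy here

class SuperLoss(torch.nn.Module):
    """Wraps any per-sample loss: SL = (l - tau) * sigma + lam * log(sigma)^2,
    with sigma* = exp(-W(0.5 * max(-2/e, (l - tau) / lam)))."""

    def __init__(self, lam: float = 1.0, n_classes: int = 10):
        super().__init__()
        self.lam = lam
        self.tau = math.log(n_classes)  # fixed tau = log C (paper's choice)

    def sigma(self, loss: torch.Tensor) -> torch.Tensor:
        # Closed-form optimal weight; treated as a constant w.r.t. gradients.
        beta = (loss.detach().cpu().numpy() - self.tau) / self.lam
        arg = 0.5 * np.maximum(beta, -2.0 / math.e)  # keep W's argument >= -1/e
        s = np.exp(-lambertw(arg).real)
        return torch.from_numpy(s).to(loss.device, loss.dtype)

    def forward(self, per_sample_loss: torch.Tensor) -> torch.Tensor:
        s = self.sigma(per_sample_loss)
        return ((per_sample_loss - self.tau) * s
                + self.lam * torch.log(s) ** 2).mean()

# Plug in any per-sample loss, e.g. cross-entropy with reduction='none':
# ce = torch.nn.functional.cross_entropy(logits, targets, reduction='none')
# loss = SuperLoss(lam=1.0, n_classes=10)(ce)
```

Samples with loss above tau get sigma < 1 (down-weighted as likely-noisy), samples below tau get sigma > 1, which is the curriculum behavior the paper describes.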

Dependencies

This code is based on Python 3.5, with the main dependencies being PyTorch==1.2.0 and torchvision==0.4.0. Additional dependencies for running the experiments: numpy, scipy, and PIL (argparse, os, csv, and sys are part of the standard library).

Run the code with the following example commands:

Uniform Noise with noise rate 0.4 on CIFAR-10

$ CUDA_VISIBLE_DEVICES=0 python3 main.py --dataset cifar10 --noise_type symmetric --noise_rate 0.4 --schedule 40 70 --epochs 100
