```bash
conda create -n 312 -c conda-forge python=3.12
conda activate 312
# conda install pytorch::pytorch torchvision torchaudio -c pytorch  # Mac
# conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia  # CUDA
# conda install pytorch torchvision torchaudio cpuonly -c pytorch  # CPU
conda install tqdm tensorboard
pip install timed-decorator bs-scheduler
git clone https://github.com/ancestor-mithril/PyTorch-Pipeline.git
cd PyTorch-Pipeline
python main.py -h
```
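A quick way to check that the install worked and that PyTorch sees your accelerator (run inside the activated environment):

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())          # True on a working CUDA build
print(torch.backends.mps.is_available())  # True on Apple Silicon builds
```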
Training on CIFAR-10 with `ReduceLROnPlateau`:

```bash
python main.py -device cuda:0 -lr 0.001 -bs 16 -epochs 200 -dataset cifar10 -data_path ../data -scheduler ReduceLROnPlateau -scheduler_params "{'mode':'min', 'factor':0.5}" -model preresnet18_c10 -fill 0.5 --cutout --autoaug --tta --half
```
Using a batch size (BS) scheduler instead:

```bash
python main.py -device cuda:0 -lr 0.001 -bs 16 -epochs 200 -dataset cifar10 -data_path ../data -scheduler IncreaseBSOnPlateau -scheduler_params "{'mode':'min', 'factor':2.0, 'max_batch_size': 1000}" -model preresnet18_c10 -fill 0.5 --cutout --autoaug --tta --half
```
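For context, `IncreaseBSOnPlateau` comes from the `bs-scheduler` package installed above: instead of decaying the learning rate on a plateau, it grows the batch size. A minimal sketch of the usual wiring, assuming the `bs_scheduler` import path and that the scheduler wraps the `DataLoader` and takes a metric in `step()` like PyTorch's `ReduceLROnPlateau`; check the bs-scheduler docs for the exact API:

```python
# Hedged sketch, not the pipeline's code. The kwargs mirror the
# -scheduler_params of the command above.
from bs_scheduler import IncreaseBSOnPlateau  # assumption: import path
from torch.utils.data import DataLoader

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)  # train_set: your Dataset
scheduler = IncreaseBSOnPlateau(train_loader, mode='min', factor=2.0, max_batch_size=1000)

for epoch in range(200):
    train_loss = train_one_epoch(model, train_loader)  # hypothetical helper
    scheduler.step(train_loss)  # grows the batch size when the metric plateaus
```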
To launch multiple experiments, modify `experiment_runner.py` and then run it.
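The exact contents of `experiment_runner.py` live in the repo; the general shape is a loop over hyperparameter combinations that shells out to `main.py`. A hypothetical sketch (the grid values and flags below are illustrative, not the repo's defaults):

```python
# Hypothetical batch runner in the spirit of experiment_runner.py.
import itertools
import subprocess

lrs = [0.001, 0.01]    # illustrative grid
batch_sizes = [16, 64]

for lr, bs in itertools.product(lrs, batch_sizes):
    subprocess.run([
        "python", "main.py",
        "-device", "cuda:0",
        "-lr", str(lr),
        "-bs", str(bs),
        "-dataset", "cifar10",
        "--disable_progress_bar", "--stderr",
    ], check=True)
```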
`-device`
- can be `cpu`, `cuda:0`, `mps`
- default: `cuda:0`
`-lr`
- learning rate
- can be any float
- default: `0.001`
`-bs`
- batch size
- can be any uint (limited by RAM/VRAM)
- default: `64`
`-epochs`
- the maximum number of epochs for training
- can be any uint
- default: `200`
`-es_patience`
- if no decrease in train loss is registered for `es_patience` epochs, training is stopped early (see the sketch below)
- can be any uint
- default: `20`
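In other words, early stopping watches the training loss. A minimal sketch of that logic (illustrative, not the repo's exact code):

```python
# Early stopping on train loss, as described above.
max_epochs, es_patience = 200, 20
best_loss, patience = float("inf"), 0

for epoch in range(max_epochs):
    train_loss = train_one_epoch()  # hypothetical helper
    if train_loss < best_loss:
        best_loss, patience = train_loss, 0
    else:
        patience += 1
        if patience >= es_patience:  # es_patience epochs without improvement
            break
```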
`-dataset`
- implemented datasets: `cifar10`, `cifar100`, `cifar100noisy`, `FashionMNIST`, `MNIST`, `DirtyMNIST`
- for other datasets: please implement them
- default: `cifar10`
`-data_path`
- path for dataset files
- default: `../data`
`-scheduler`
- implemented schedulers: `IncreaseBSOnPlateau`, `ReduceLROnPlateau`, `StepBS`, `StepLR`, `ExponentialBS`, `ExponentialLR`, `PolynomialBS`, `PolynomialLR`, `CosineAnnealingBS`, `CosineAnnealingLR`, `CosineAnnealingBSWithWarmRestarts`, `CosineAnnealingWarmRestarts`, `CyclicBS`, `CyclicLR`, `OneCycleBS`, `OneCycleLR`, `LinearLR`, `LinearBS`
- for other schedulers: please implement them
- default: `None`
`-scheduler_params`
- a JSON-like string containing all parameters to be passed to the scheduler (see the parsing note below)
- default: `"{}"`
`-model`
- implemented models: `preresnet18_c10`, `lenet_mnist`, `cnn_mnist`
- for other models: please implement them
- default: `preresnet18_c10`
`-criterion`
- implemented criterions: `crossentropy`
- for other criterions: please implement them
- default: `crossentropy`
`-reduction`
- implemented reductions: `mean`, `iqr`, `stdmean`
- for other reductions: please implement them
- default: `mean`
`-loss_scaling`
- implemented loss scaling: `normal-scaling`
- for others: please implement them
- default: `None`
`-fill`
- fill value for transformations
- can be any float between 0 and 1 (recommended: 0.5)
- default: `None` (`None` usually means 0, depending on the transformation)
`-num_threads`
- number of threads used by PyTorch on CPU
- can be any non-zero uint
- default: `None` (`None` means half of the available threads)
`-seed`
- seed for the numpy, torch, cuda and python random generators (see the helper below)
- can be any uint
- default: `3`
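Seeding all four generators listed above usually looks like this (a sketch; the repo may differ in details):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 3) -> None:
    random.seed(seed)                 # python
    np.random.seed(seed)              # numpy
    torch.manual_seed(seed)           # torch (CPU)
    torch.cuda.manual_seed_all(seed)  # cuda (all devices)
```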
`--cutout`
- if added, uses Cutout (must be implemented first in transformations; currently implemented for `cifar10`, `cifar100`, `cifar100noisy` and `FashionMNIST`); see the sketch below
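Cutout masks a random square patch of the image with a fill value (the `-fill` option above). A generic tensor-transform version, for illustration only and not the repo's implementation:

```python
import torch

class Cutout:
    """Generic Cutout: overwrite a random square patch with a fill value."""
    def __init__(self, size: int = 8, fill: float = 0.5):
        self.size, self.fill = size, fill

    def __call__(self, img: torch.Tensor) -> torch.Tensor:  # img: (C, H, W)
        _, h, w = img.shape
        y = torch.randint(0, h - self.size + 1, (1,)).item()
        x = torch.randint(0, w - self.size + 1, (1,)).item()
        img = img.clone()
        img[:, y:y + self.size, x:x + self.size] = self.fill
        return img
```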
`--autoaug`
- if added, uses AutoAugment (must be implemented first in transformations; currently implemented for `cifar10`, `cifar100`, `cifar100noisy` and `FashionMNIST`)
`--tta`
- if added, performs TTA for validation, using the original and a horizontally flipped tensor (random horizontal flipping should be used during training); see the sketch below
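The two-view TTA described above amounts to averaging predictions over the batch and its horizontal flip. A minimal sketch:

```python
import torch

@torch.no_grad()
def tta_logits(model, x):
    """Average model outputs over the original batch and its horizontal flip."""
    flipped = torch.flip(x, dims=[-1])  # flip along the width axis
    return (model(x) + model(flipped)) / 2
```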
`--half`
- if added, uses torch AMP, i.e. mixed precision (faster, uses less VRAM); see the pattern below
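`--half` refers to PyTorch automatic mixed precision; the standard autocast + GradScaler pattern looks like this (the pipeline's exact setup may differ):

```python
import torch

def train_amp(model, loader, optimizer, criterion):
    # Standard torch AMP training step: autocast forward, scaled backward.
    scaler = torch.cuda.amp.GradScaler()
    for inputs, targets in loader:
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = criterion(model(inputs.cuda()), targets.cuda())
        scaler.scale(loss).backward()  # scale to avoid fp16 gradient underflow
        scaler.step(optimizer)         # unscales gradients before the step
        scaler.update()
```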
`--disable_progress_bar`
- if added, disables the tqdm bar that tracks training progress
`--verbose`
- if added, prints additional information to the console
`--stderr`
- if added, prints to stderr instead of stdout; works great in combination with a silent `experiment_runner`