StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze new ideas.
Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U).
- StudioGAN paper is accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023.
- We provide all checkpoints we used: Please visit Hugging Face Hub.
- Our new paper "StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis" is made public on arXiv.
- StudioGAN provides implementations of 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 3 differentiable augmentations, 8 evaluation metrics, and 5 evaluation backbones.
- StudioGAN supports both clean and architecture-friendly metrics (IS, FID, PRDC, IFID) with a comprehensive benchmark.
- StudioGAN provides wandb logs and pre-trained models (will be ready soon).
- We checked the reproducibility of implemented GANs.
- We provide Baby, Papa, and Grandpa ImageNet datasets where images are processed using the anti-aliasing and high-quality resizer.
- StudioGAN provides a carefully curated benchmark on standard datasets (CIFAR10, ImageNet, AFHQv2, and FFHQ).
- StudioGAN supports InceptionV3, ResNet50, SwAV, DINO, and Swin Transformer backbones for GAN evaluation.
- Coverage: StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives.
- Flexibility: Each modularized option is managed through a configuration system that works through a YAML file, so users can train a large combination of GANs by mixing and matching distinct options.
- Reproducibility: With StudioGAN, users can compare and debug various GANs in a unified computing environment without worrying about hidden details and tricks.
- Plentifulness: StudioGAN provides a large collection of pre-trained GAN models, training logs, and evaluation results.
- Versatility: StudioGAN supports 5 types of acceleration methods with synchronized batch normalization for training: single-GPU training, data-parallel training (DP), distributed data-parallel training (DDP), multi-node distributed data-parallel training (MDDP), and mixed-precision training.
Method | Venue | Architecture | GC | DC | Loss | EMA |
---|---|---|---|---|---|---|
DCGAN | arXiv'15 | DCGAN/ResNetGAN1 | N/A | N/A | Vanilla | False |
InfoGAN | NIPS'16 | DCGAN/ResNetGAN1 | N/A | N/A | Vanilla | False |
LSGAN | ICCV'17 | DCGAN/ResNetGAN1 | N/A | N/A | Least Square | False |
GGAN | arXiv'17 | DCGAN/ResNetGAN1 | N/A | N/A | Hinge | False |
WGAN-WC | ICLR'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
WGAN-GP | NIPS'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
WGAN-DRA | arXiv'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
ACGAN-Mod2 | - | ResNetGAN | cBN | AC | Hinge | False |
PDGAN | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
SNGAN | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
SAGAN | ICML'19 | ResNetGAN | cBN | PD | Hinge | False |
TACGAN | NeurIPS'19 | BigGAN | cBN | TAC | Hinge | True |
LGAN | ICML'19 | ResNetGAN | N/A | N/A | Vanilla | False |
Unconditional BigGAN | ICLR'19 | BigGAN | N/A | N/A | Hinge | True |
BigGAN | ICLR'19 | BigGAN | cBN | PD | Hinge | True |
BigGAN-Deep-CompareGAN | ICLR'19 | BigGAN-Deep CompareGAN | cBN | PD | Hinge | True |
BigGAN-Deep-StudioGAN | - | BigGAN-Deep StudioGAN | cBN | PD | Hinge | True |
StyleGAN2 | CVPR'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
CRGAN | ICLR'20 | BigGAN | cBN | PD | Hinge | True |
ICRGAN | AAAI'21 | BigGAN | cBN | PD | Hinge | True |
LOGAN | arXiv'19 | ResNetGAN | cBN | PD | Hinge | True |
ContraGAN | NeurIPS'20 | BigGAN | cBN | 2C | Hinge | True |
MHGAN | WACV'21 | BigGAN | cBN | MH | MH | True |
BigGAN + DiffAugment | NeurIPS'20 | BigGAN | cBN | PD | Hinge | True |
StyleGAN2 + ADA | NeurIPS'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
BigGAN + LeCam | CVPR'21 | BigGAN | cBN | PD | Hinge | True |
ReACGAN | NeurIPS'21 | BigGAN | cBN | D2D-CE | Hinge | True |
StyleGAN2 + APA | NeurIPS'21 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
StyleGAN3-t | NeurIPS'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
StyleGAN3-r | NeurIPS'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
ADCGAN | ICML'22 | BigGAN | cBN | ADC | Hinge | True |
GC/DC indicates the way we inject label information into the Generator or Discriminator.
EMA: Exponential Moving Average update to the generator. cBN: conditional Batch Normalization (see the sketch below). cAdaIN: conditional version of Adaptive Instance Normalization. AC: Auxiliary Classifier. PD: Projection Discriminator. TAC: Twin Auxiliary Classifier. SPD: modified PD for StyleGAN. 2C: Conditional Contrastive loss. MH: Multi-Hinge loss. ADC: Auxiliary Discriminative Classifier. D2D-CE: Data-to-Data Cross-Entropy.
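For intuition, here is a minimal sketch of conditional Batch Normalization (cBN). The module and its initialization are illustrative assumptions, not StudioGAN's actual implementation: the class label selects per-channel gain and bias vectors that modulate the normalized activations.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Minimal cBN sketch: per-channel affine parameters come from the class label."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)  # normalization only
        self.gain = nn.Embedding(num_classes, num_features)   # class-conditional gamma
        self.bias = nn.Embedding(num_classes, num_features)   # class-conditional beta
        nn.init.ones_(self.gain.weight)
        nn.init.zeros_(self.bias.weight)

    def forward(self, x, y):
        out = self.bn(x)
        gamma = self.gain(y).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        beta = self.bias(y).unsqueeze(-1).unsqueeze(-1)
        return gamma * out + beta
```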
Method | Venue | Architecture |
---|---|---|
Inception Score (IS) | NIPS'16 | InceptionV3 |
Frechet Inception Distance (FID) | NIPS'17 | InceptionV3 |
Improved Precision & Recall | NeurIPS'19 | InceptionV3 |
Classifier Accuracy Score (CAS) | NeurIPS'19 | InceptionV3 |
Density & Coverage | ICML'20 | InceptionV3 |
Intra-class FID | - | InceptionV3 |
SwAV FID | ICLR'21 | SwAV |
Clean metrics (IS, FID, PRDC) | CVPR'22 | InceptionV3 |
Architecture-friendly metrics (IS, FID, PRDC) | arXiv'22 | Not limited to InceptionV3 |
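The difference between clean and architecture-friendly metrics in the table above comes down to how images are resized before being fed to the evaluation backbone. A minimal sketch of anti-aliased (high-quality) resizing with PIL is shown below; the file names and target size are placeholders.

```python
from PIL import Image

# Anti-aliased resizing with the LANCZOS filter, as opposed to a naive
# nearest/bilinear resize that can introduce aliasing artifacts.
img = Image.open("input.png").convert("RGB")            # "input.png" is a placeholder
resized = img.resize((64, 64), resample=Image.LANCZOS)  # high-quality downsampling
resized.save("resized.png")
```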
Method | Venue | Target Architecture |
---|---|---|
FreezeD | CVPRW'20 | Except for StyleGAN2 |
Top-K Training | NeurIPS'20 | - |
DDLS | NeurIPS'20 | - |
SeFa | CVPR'21 | BigGAN |
We check the reproducibility of GANs implemented in StudioGAN by comparing IS and FID with the original papers. We find that our platform successfully reproduces most of the representative GANs, except for PD-GAN, ACGAN, LOGAN, SAGAN, and BigGAN-Deep. FQ means the Flickr-Faces-HQ dataset (FFHQ). The resolutions of the ImageNet, AFHQv2, and FQ datasets are 128, 512, and 1024, respectively.
First, install PyTorch meeting your environment (at least 1.7):
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
Then, use the following command to install the rest of the libraries:
pip install tqdm ninja h5py kornia matplotlib pandas scikit-learn scipy seaborn wandb PyYaml click requests pyspng imageio-ffmpeg timm
With docker, you can use (Updated 14/DEC/2022):
docker pull alex4727/experiment:pytorch113_cuda116
This is our command to make a container named "StudioGAN".
docker run -it --gpus all --shm-size 128g --name StudioGAN -v /path_to_your_folders:/root/code --workdir /root/code alex4727/experiment:pytorch113_cuda116 /bin/zsh
If your NVIDIA driver version doesn't satisfy the requirements, you can try adding the option below to the command above.
--env NVIDIA_DISABLE_REQUIRE=true
- CIFAR10/CIFAR100: StudioGAN will automatically download the dataset once you execute `main.py`.
- Tiny ImageNet, ImageNet, or a custom dataset:
- download Tiny ImageNet, Baby ImageNet, Papa ImageNet, Grandpa ImageNet, or ImageNet, or prepare your own dataset.
- make the folder structure of the dataset as follows:
data
└── ImageNet, Tiny_ImageNet, Baby ImageNet, Papa ImageNet, or Grandpa ImageNet
├── train
│ ├── cls0
│ │ ├── train0.png
│ │ ├── train1.png
│ │ └── ...
│ ├── cls1
│ └── ...
└── valid
├── cls0
│ ├── valid0.png
│ ├── valid1.png
│ └── ...
├── cls1
└── ...
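This layout matches what torchvision's `ImageFolder` expects (one subfolder per class), so a quick sanity check of a prepared dataset might look like the sketch below; the root path and image size are placeholders.

```python
from torchvision import datasets, transforms

# Hypothetical path; point it at your own data/<DATASET>/train folder.
transform = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder(root="data/ImageNet/train", transform=transform)
print(len(train_set), train_set.classes[:3])  # image count and first few class names
```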
Before starting, users should log in to wandb using their personal API key.
wandb login PERSONAL_API_KEY
From release 0.3.0, you can define which evaluation metrics to use through the `-metrics` option. Not specifying the option defaults to calculating FID only, i.e., `-metrics is fid` calculates only IS and FID, and `-metrics none` skips evaluation.
- Train (`-t`) and evaluate IS, FID, Prc, Rec, Dns, Cvg (`-metrics is fid prdc`) of the model defined in `CONFIG_PATH` using GPU `0`.
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- Preprocess images for training and evaluation using the PIL.LANCZOS filter (`--pre_resizer lanczos`). Then, train (`-t`) and evaluate clean-IS, clean-FID, clean-Prc, clean-Rec, clean-Dns, clean-Cvg (`-metrics is fid prdc --post_resizer clean`) of the model defined in `CONFIG_PATH` using GPU `0`.
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc --pre_resizer lanczos --post_resizer clean -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- Train (`-t`) and evaluate FID of the model defined in `CONFIG_PATH` through `DataParallel` using GPUs `(0, 1, 2, 3)`. Evaluation of FID does not require the `-metrics` argument!
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- Train (`-t`) and skip evaluation (`-metrics none`) of the model defined in `CONFIG_PATH` through `DistributedDataParallel` using GPUs `(0, 1, 2, 3)`, `Synchronized batch norm`, and `Mixed precision` (a PyTorch sketch of these flags follows the command below).
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics none -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -DDP -sync_bn -mpc
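For intuition, here is a rough PyTorch sketch (with assumed helper names, not StudioGAN's internals) of what `-DDP`, `-sync_bn`, and `-mpc` correspond to: `DistributedDataParallel`, `SyncBatchNorm` conversion, and `torch.cuda.amp` mixed precision.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes launch via torchrun, which sets RANK/WORLD_SIZE; MASTER_ADDR and
# MASTER_PORT are read from the environment as in the commands above.
dist.init_process_group("nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = build_model().cuda(local_rank)  # build_model/loader/optimizer are placeholders
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)  # -sync_bn
model = DDP(model, device_ids=[local_rank])                   # -DDP
scaler = torch.cuda.amp.GradScaler()                          # -mpc

for x, y in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # mixed-precision forward pass
        loss = model(x.cuda(local_rank), y.cuda(local_rank))
    scaler.scale(loss).backward()    # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```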
Try `python3 src/main.py` to see available options.
- Load All Data in Main Memory (`-hdf5 -l`)

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -hdf5 -l -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- DistributedDataParallel (please refer to Here) (`-DDP`)

### NODE_0, 4 GPUs, all ports are open to NODE_1
~/code>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0
~/code>>> export MASTER_PORT=AVAILABLE_PORT_OF_NODE_0
~/code/PyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -DDP -tn 2 -cn 0 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH

### NODE_1, 4 GPUs, all ports are open to NODE_0
~/code>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0
~/code>>> export MASTER_PORT=AVAILABLE_PORT_OF_NODE_0
~/code/PyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -DDP -tn 2 -cn 1 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- Mixed Precision Training (`-mpc`)

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -mpc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
- Change Batch Normalization Statistics

# Synchronized batchNorm (-sync_bn)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -sync_bn -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH

# Standing statistics (-std_stat, -std_max, -std_step)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -std_stat -std_max STD_MAX -std_step STD_STEP -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH

# Batch statistics (-batch_stat)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -batch_stat -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
- Truncation Trick (`--truncation_factor`), sketched after the command below

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py --truncation_factor TRUNCATION_FACTOR -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
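A minimal sketch of what a truncation trick like this does (an illustrative resampling variant, not necessarily StudioGAN's exact procedure): latent entries whose magnitude exceeds the truncation factor are redrawn, trading sample diversity for fidelity.

```python
import torch

def truncated_z(batch_size, z_dim, truncation_factor):
    """Redraw latent entries until all lie within [-truncation_factor, truncation_factor]."""
    z = torch.randn(batch_size, z_dim)
    mask = z.abs() > truncation_factor
    while mask.any():
        z[mask] = torch.randn(int(mask.sum()))  # resample out-of-range entries
        mask = z.abs() > truncation_factor
    return z

z = truncated_z(4, 128, truncation_factor=0.5)  # placeholder sizes
```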
- DDLS (`-lgv -lgv_rate -lgv_std -lgv_decay -lgv_decay_steps -lgv_steps`)

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -lgv -lgv_rate LGV_RATE -lgv_std LGV_STD -lgv_decay LGV_DECAY -lgv_decay_steps LGV_DECAY_STEPS -lgv_steps LGV_STEPS -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
- Freeze Discriminator (`-freezeD`), sketched after the command below

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t --freezeD FREEZED -ckpt SOURCE_CKPT -cfg TARGET_CONFIG_PATH -data DATA_PATH -save SAVE_PATH
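Conceptually, FreezeD keeps the lower (feature-extractor) layers of a pre-trained discriminator fixed while fine-tuning the rest on the target dataset. A rough sketch is below; `discriminator.blocks` and the cutoff value are illustrative assumptions, not StudioGAN's actual attribute names.

```python
FREEZED = 2  # number of lower discriminator blocks to freeze (placeholder)

# "blocks" is an assumed attribute name for the discriminator's residual blocks.
for idx, block in enumerate(discriminator.blocks):
    if idx < FREEZED:
        for param in block.parameters():
            param.requires_grad = False  # lower layers stay fixed during fine-tuning
```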
StudioGAN supports image visualization, K-nearest neighbor analysis, linear interpolation, frequency analysis, TSNE analysis, and semantic factorization. All results will be saved in `SAVE_DIR/figures/RUN_NAME/*.png`.
- Image Visualization
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -v -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
- K-Nearest Neighbor Analysis (we fix K to 7; the images in the first column are generated images)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -knn -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
- Linear Interpolation (applicable only to conditional Big ResNet models)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -itp -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
- Frequency Analysis
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -fa -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
- TSNE Analysis
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -tsne -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
- Semantic Factorization for BigGAN (a rough SeFa sketch follows the command below)
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -sefa -sefa_axis SEFA_AXIS -sefa_max SEFA_MAX -cfg CONFIG_PATH -ckpt CKPT -save SAVE_PATH
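SeFa finds semantic directions in closed form: they are the eigenvectors of W^T W, where W is the weight that first maps the latent code into the generator. A rough sketch under assumed attribute names (`generator.linear` is illustrative, not StudioGAN's actual module name):

```python
import torch

# "generator.linear" is an assumed name for the layer projecting z into the network.
W = generator.linear.weight.detach()             # shape (out_dim, z_dim)
eigvals, eigvecs = torch.linalg.eigh(W.t() @ W)  # eigenvalues in ascending order
directions = eigvecs.flip(1)                     # re-order to descending importance

z = torch.randn(1, W.shape[1])
alpha = 3.0                                      # edit strength (placeholder)
z_edit = z + alpha * directions[:, 0]            # move along the top semantic axis
```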
StudioGAN supports the training of 30 representative GANs from DCGAN to StyleGAN3-r.
We used different scripts depending on the dataset and model, as follows:
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -hdf5 -l -sync_bn -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
StudioGAN supports Inception Score, Frechet Inception Distance, Improved Precision and Recall, Density and Coverage, Intra-Class FID, and Classifier Accuracy Score. Users can get Intra-Class FID and Classifier Accuracy Score using the `-iFID`, `-GAN_train`, and `-GAN_test` options, respectively.
Users can change the evaluation backbone from InceptionV3 to ResNet50, SwAV, DINO, or Swin Transformer using the `--eval_backbone ResNet50_torch`, `SwAV_torch`, `DINO_torch`, or `Swin-T_torch` option.
In addition, users can calculate metrics with the clean or architecture-friendly resizer using the `--post_resizer clean` or `--post_resizer friendly` option.
Inception Score (IS) is a metric to measure how well a GAN generates high-fidelity and diverse images. Calculating IS requires the pre-trained InceptionV3 network. Note that we do not split a dataset into ten folds to calculate IS ten times.
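As a reference, here is a minimal sketch of the IS computation from InceptionV3 logits of generated images (a `(N, 1000)` tensor); this is illustrative, not StudioGAN's exact code.

```python
import torch
import torch.nn.functional as F

def inception_score(logits):
    """IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), computed once over all samples."""
    probs = F.softmax(logits, dim=1)                      # p(y|x)
    marginal = probs.mean(dim=0, keepdim=True)            # p(y)
    kl = (probs * (probs.log() - marginal.log())).sum(1)  # KL per sample
    return kl.mean().exp().item()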
FID is a widely used metric to evaluate the performance of a GAN model. Calculating FID requires the pre-trained InceptionV3 network, and modern approaches use the TensorFlow-based FID. StudioGAN utilizes the PyTorch-based FID to test GAN models in the same PyTorch environment. We show that the PyTorch-based FID implementation provides almost the same results as the TensorFlow implementation (see Appendix F of the ContraGAN paper).
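For reference, a sketch of the Frechet distance between the two Gaussians fitted to real and fake InceptionV3 features (mu/sigma are the feature means and covariances); this mirrors common PyTorch FID implementations but is illustrative only.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))"""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```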
Improved precision and recall were developed to make up for the shortcomings of the original precision and recall. Like IS and FID, calculating improved precision and recall requires the pre-trained InceptionV3 model. StudioGAN uses the PyTorch implementation provided by the developers of the density and coverage scores.
Density and coverage metrics estimate the fidelity and diversity of generated images using the pre-trained InceptionV3 model. The metrics are known to be robust to outliers, and they can detect identical real and fake distributions. StudioGAN uses the authors' official PyTorch implementation, and StudioGAN follows the authors' suggestion for hyperparameter selection (see the usage sketch below).
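A usage sketch of the authors' `prdc` package (`pip install prdc`), which returns improved precision/recall together with density/coverage; the feature arrays below are random placeholders standing in for InceptionV3 features.

```python
import numpy as np
from prdc import compute_prdc

real_features = np.random.randn(10000, 2048)  # placeholder InceptionV3 features
fake_features = np.random.randn(10000, 2048)

metrics = compute_prdc(real_features=real_features,
                       fake_features=fake_features,
                       nearest_k=5)  # k=5 follows the authors' suggestion
print(metrics)  # dict with 'precision', 'recall', 'density', 'coverage'
```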
※ We always welcome your contribution if you find a wrong implementation, bug, or misreported score.
We report the best IS, FID, Improved Precision & Recall, and Density & Coverage of GANs.
To download all checkpoints reported in StudioGAN, please click here (Hugging Face Hub).
You can evaluate a checkpoint by adding the `-ckpt CKPT_PATH` option with the corresponding configuration path `-cfg CORRESPONDING_CONFIG_PATH`.
The resolutions of CIFAR10, Baby ImageNet, Papa ImageNet, Grandpa ImageNet, ImageNet, AFHQv2, and FQ are 32, 64, 64, 64, 128, 512, and 1024, respectively.
We use the same number of generated images as the training images for Frechet Inception Distance (FID), Precision, Recall, Density, and Coverage calculation. For the experiments using Baby/Papa/Grandpa ImageNet and ImageNet, we exceptionally use 50k generated images against the complete training set of real images.
All features and moments of reference datasets can be downloaded via features and moments.
The resolutions of ImageNet-128 and ImageNet-256 are 128 and 256, respectively.
All images used for Benchmark can be downloaded via One Drive (will be uploaded soon).
- Evaluate IS, FID, Prc, Rec, Dns, Cvg (`-metrics is fid prdc`) of image folders (already preprocessed) saved in DSET1 and DSET2 using GPUs `(0,...,N)`.
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1 DSET1 --dset2 DSET2
- Evaluate IS, FID, Prc, Rec, Dns, Cvg (`-metrics is fid prdc`) of the image folder saved in DSET2 using pre-computed features (`--dset1_feats DSET1_FEATS`) and moments of dset1 (`--dset1_moments DSET1_MOMENTS`), and GPUs `(0,...,N)`.
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1_feats DSET1_FEATS --dset1_moments DSET1_MOMENTS --dset2 DSET2
- Evaluate friendly-IS, friendly-FID, friendly-Prc, friendly-Rec, friendly-Dns, friendly-Cvg (`-metrics is fid prdc --post_resizer friendly`) of image folders saved in DSET1 and DSET2 through `DistributedDataParallel` using GPUs `(0,...,N)`.
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --post_resizer friendly --dset1 DSET1 --dset2 DSET2 -DDP
[MIT license] Synchronized BatchNorm: https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
[MIT license] Self-Attention module: https://github.com/voletiv/self-attention-GAN-pytorch
[MIT license] DiffAugment: https://github.com/mit-han-lab/data-efficient-gans
[MIT license] PyTorch Improved Precision and Recall: https://github.com/clovaai/generative-evaluation-prdc
[MIT license] PyTorch Density and Coverage: https://github.com/clovaai/generative-evaluation-prdc
[MIT license] PyTorch clean-FID: https://github.com/GaParmar/clean-fid
[NVIDIA source code license] StyleGAN2: https://github.com/NVlabs/stylegan2
[NVIDIA source code license] Adaptive Discriminator Augmentation: https://github.com/NVlabs/stylegan2
[Apache License] PyTorch FID: https://github.com/mseitzer/pytorch-fid
PyTorch-StudioGAN is an open-source library under the MIT license (MIT). However, portions of the library are available under distinct license terms: StyleGAN2, StyleGAN2-ADA, and StyleGAN3 are licensed under the NVIDIA source code license, and PyTorch-FID is licensed under the Apache License.
StudioGAN is established for the following research projects. Please cite our work if you use StudioGAN.
@article{kang2023StudioGANpami,
title = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},
author = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year = {2023}
}
@inproceedings{kang2021ReACGAN,
  title = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},
  author = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}
@inproceedings{kang2020ContraGAN,
  title = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},
  author = {Minguk Kang and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year = {2020}
}
[1] Experiments on Tiny ImageNet are conducted using the ResNet architecture instead of CNN.
[2] Our re-implementation of ACGAN (ICML'17) with slight modifications, which bring a strong performance enhancement in the CIFAR10 experiment.