Implement and reproduce results of the following papers:
- Momentum Contrast for Unsupervised Visual Representation Learning
- Improved Baselines with Momentum Contrastive Learning
Dependencies:

- TensorFlow 1.14 or 1.15, built with XLA support
- Tensorpack ≥ 0.10.1
- Horovod ≥ 0.19, built with Gloo & NCCL support
- TensorFlow zmq_ops
- OpenCV
- the `taskset` command (from the `util-linux` package)
To run MoCo pre-training on a machine with 8 GPUs, use:
```
horovodrun -np 8 --output-filename moco.log python main_moco.py --data /path/to/imagenet
```
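For orientation, the core of MoCo pre-training — a momentum-updated key encoder plus a queue of negative keys — can be sketched in plain Python as below. This is a minimal NumPy sketch of the papers' update rules, not the repository's TensorFlow code; the function and variable names are assumptions, while the hyper-parameter values come from the MoCo paper:

```python
import numpy as np

# MoCo's two defining pieces (a sketch; names are assumptions):
# the key encoder is a momentum moving average of the query encoder, and
# negatives for the contrastive loss come from a FIFO queue of past keys.
MOMENTUM = 0.999     # m in the paper
QUEUE_SIZE = 65536   # K in the paper

def momentum_update(query_weights, key_weights, m=MOMENTUM):
    """key <- m * key + (1 - m) * query, applied per parameter tensor."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_weights, key_weights)]

def dequeue_and_enqueue(queue, new_keys, max_size=QUEUE_SIZE):
    """Append the newest keys and drop the oldest, keeping the queue fixed-size."""
    queue = np.concatenate([queue, new_keys], axis=0)
    return queue[-max_size:]
```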
Add `--v2` to train MoCo v2, which uses an extra MLP layer, extra augmentations, and a cosine LR schedule.
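The two code-level v2 additions can be sketched as follows (a minimal sketch, assuming a `features` tensor from the backbone; the layer sizes follow the MoCo v2 paper, but the function and variable names are hypothetical, not the repository's code):

```python
import math
import tensorflow as tf

def projection_head_v2(features, dim=128):
    """MoCo v2 replaces the single FC head with a 2-layer MLP (ReLU hidden layer)."""
    hidden = tf.layers.dense(features, 2048, activation=tf.nn.relu, name="fc1")
    return tf.layers.dense(hidden, dim, name="fc2")

def cosine_lr(base_lr, cur_epoch, total_epochs=200):
    """Cosine LR schedule: decay from base_lr to 0 over the course of training."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cur_epoch / total_epochs))
```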
To train a linear classifier using the pre-trained features, run:
```
./main_lincls.py --load /path/to/pretrained/checkpoint --data /path/to/imagenet
```
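Conceptually, linear classification trains a single fully-connected layer on top of the frozen pre-trained backbone. A minimal sketch (assumed names, not `main_lincls.py` itself):

```python
import tensorflow as tf

def linear_classifier_logits(backbone_features, num_classes=1000):
    """Train only a linear layer; the pre-trained backbone stays frozen."""
    # stop_gradient keeps gradients from flowing into the pre-trained weights
    frozen = tf.stop_gradient(backbone_features)
    return tf.layers.dense(frozen, num_classes, name="linear_cls")
```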
As a cheaper but rougher alternative to linear classification, you can run a feature-space kNN evaluation against the training set:

```
horovodrun -np 8 ./eval_knn.py --load /path/to/checkpoint --data /path/to/imagenet --top-k 200
```
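A feature-space kNN classifies each validation image by a vote among its `k` nearest training features, measured by cosine similarity on L2-normalized features. A minimal NumPy sketch, assuming features have already been extracted (names are hypothetical, not `eval_knn.py` itself):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=200):
    """Majority vote over the k most similar training features (cosine similarity)."""
    train_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test_feats = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = test_feats @ train_feats.T                # (num_test, num_train)
    nearest = np.argsort(-sims, axis=1)[:, :k]       # indices of k nearest neighbors
    votes = train_labels[nearest]                    # labels of those neighbors
    return np.array([np.bincount(row).argmax() for row in votes])
```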
Training was done on a machine with 8 V100s, >200 GB RAM, and 80 CPUs.
The following results were obtained after 200 epochs of pre-training (~53 h) and 100 epochs of linear classifier tuning (~8 h). kNN evaluation takes ~10 min per checkpoint.
| model | linear cls. accuracy | download (pretrained only) | tensorboard |
|---|---|---|---|
| MoCo v1 | 60.9% | ⬇️ | N/A |
| MoCo v2 | 67.7% | ⬇️ | pretrain; finetune |
- Horovod with Gloo is recommended. Horovod with MPI is not tested and may crash due to how we use forking.
- If using TensorFlow without XLA support, you can modify `main_*.py` to replace `xla.compile` with a naive forward; see the sketch after this list.
- Official PyTorch code is at facebookresearch/moco.
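A minimal sketch of that substitution, assuming the forward pass is wrapped in a helper like the one below (the function and flag names are hypothetical; check `main_*.py` for the actual call site):

```python
from tensorflow.contrib.compiler import xla  # available in TF 1.14/1.15

USE_XLA = False  # set to False when your TF build lacks XLA support

def maybe_compiled_forward(forward_fn, inputs):
    """Run the model's forward pass, optionally through XLA compilation."""
    if USE_XLA:
        # xla.compile returns the computation's outputs as a list
        return xla.compile(forward_fn, inputs=inputs)
    # naive forward: call the function on the inputs directly
    return forward_fn(*inputs)
```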