This repository contains the PyTorch implementation of the CShift model introduced in our paper:
- Paper: Self-Supervised Learning in Multi-Task Graphs through Iterative Consensus Shift
- Authors: Emanuela Haller*, Elena Burceanu* and Marius Leordeanu (*equal contribution)
- Venue: BMVC 2021
- Video, Presentation
We provide code for reproducing the main results of our paper on two datasets (Replica and Hypersim). Our self-supervised approach exploits the consensus in a multi-task graph to adapt to newly seen data distributions. As a starting point, we use off-the-shelf expert models trained out-of-distribution on various visual tasks (we provide code for 13 such experts). The graph then self-adapts to the target distribution, surpassing the performance of the initial experts.
Create a conda environment with the following configuration (it includes dependencies for all the experts):
```shell
conda env create -f environment_chift.yml
```
Download the pretrained expert models:
```shell
cd experts/models; bash get_models.sh
```
We provide code for preprocessing 13 experts, on 2 datasets: Replica and Hypersim.
Generate and preprocess the Replica dataset:
- download metadata from the Replica repo; see `preprocess_dbs/replica_generator/generator.py` in our repo
- generate the dataset: `cd preprocess_dbs/replica_generator; python generator.py`
- preprocess it: `cd preprocess_dbs; bash preprocess_replica.sh`
- note: you need to update the paths in both `preprocess_dbs/replica_generator/generator.py` and `preprocess_dbs/main_replica.py`
Generate and preprocess the Hypersim dataset:
- download the dataset from the Hypersim repo; see `preprocess_dbs/hypersim/dataset_download_images.py` in our repo
- our dataset splits are available in `preprocess_dbs/hypersim/cshift_hypersim_splits.csv`
- preprocess it: `cd preprocess_dbs; bash preprocess_hypersim.sh`
- note: you need to update the paths in `preprocess_dbs/main_hypersim.py`
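If you want to consume the splits file programmatically, a minimal sketch is below. The `split` column name is an assumption for illustration; check the header of `cshift_hypersim_splits.csv` for the actual field names.

```python
import csv
from collections import defaultdict

def load_splits(csv_path: str, split_column: str = "split") -> dict:
    """Group the rows of a splits CSV by split name (train/val/test).

    `split_column` is a hypothetical column name; inspect the actual
    header of cshift_hypersim_splits.csv and adjust it accordingly.
    """
    splits = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            splits[row[split_column]].append(row)
    return dict(splits)
```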
Once the datasets are generated and preprocessed, we train each edge on the current iteration's pseudo-labels. Each line in the following script trains all the graph's edges that reach a certain node (e.g. depth_xtc):
```shell
bash script_train_all.sh
```
- note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.
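Assuming the configuration files are INI-style (as the `[PathsIter]` and `[Edge Models]` section names suggest), the path updates can be scripted; the `data_root` key below is a placeholder for whatever entries your config actually contains:

```python
import configparser

def set_config_paths(cfg_path: str, paths_iter_updates: dict, load_path: str) -> None:
    """Point [PathsIter] entries and [Edge Models] load_path at your own
    directories. Key names inside [PathsIter] are illustrative; check the
    actual configuration file for the real ones."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    for key, value in paths_iter_updates.items():
        cfg["PathsIter"][key] = value
    cfg["Edge Models"]["load_path"] = load_path
    with open(cfg_path, "w") as f:
        cfg.write(f)
```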
The following script evaluates each edge and the CShift selection-based ensemble:
```shell
bash script_eval_all.sh
```
- note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.
For storing the current CShift predictions (to use them as pseudo-labels in the next iteration):
```shell
bash script_store_all.sh
```
- note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.
To add a new iteration, repeat the training and store steps, using as pseudo-labels the outputs saved in the previous iteration at the store step.
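The outer self-supervision loop can be sketched in shell. The script names come from this README; the iteration count and the wrapper function are illustrative, and the per-iteration pseudo-label paths still have to be set in the configs as noted above.

```shell
run_cshift_iterations() {
    local n_iters="$1"
    for i in $(seq 1 "$n_iters"); do
        echo "=== CShift iteration $i ==="
        bash script_train_all.sh   # retrain edges on current pseudo-labels
        bash script_store_all.sh   # store predictions as next pseudo-labels
    done
}
```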
Task | Expert model | Training dataset | Path in repo |
---|---|---|---|
RGB | - | - | RGB loading code |
Halftone | halftone-python | - | Halftone loading code |
Grayscale | - | - | Gray loading code |
HSV | rgb2hsv | - | HSV loading code |
Depth | XTC | Taskonomy | Depth loading code |
Surface normals | XTC | Taskonomy | Normals loading code |
Low-level edges (small) | Sobel (sigma 0.1) | - | Small Edges loading code |
Low-level edges (medium) | Sobel (sigma 1) | - | Medium Edges loading code |
Low-level edges (large) | Sobel (sigma 4) | - | Large Edges loading code |
High-level edges | DexiNed | BIPED | High-level edges loading code |
Superpixel | SpixelNet | SceneFlow, BSDS500 | Superpixel loading code |
Cartoon | Cartoonize | FFHQ | Cartoon loading code |
Semantic segmentation | HRNetv2 | ADE20k | Semantic segmentation loading code |
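The simplest experts in the table need no learned model. A minimal numpy sketch of grayscale conversion and Sobel edge magnitude follows; it is illustrative only, not the repo's loading code, and it omits the Gaussian pre-smoothing (sigma 0.1 / 1 / 4) that distinguishes the small, medium, and large low-level edge experts.

```python
import numpy as np

def rgb_to_gray(img: np.ndarray) -> np.ndarray:
    """Luminance grayscale from an (H, W, 3) float RGB image."""
    return img @ np.array([0.299, 0.587, 0.114])

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude over the valid (H-2, W-2) region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # accumulate the 3x3 correlation
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```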
If you find the code, models, or data useful, please cite this paper:
```
@inproceedings{haller2021unsupervised,
  title={Self-Supervised Learning in Multi-Task Graphs through Iterative Consensus Shift},
  author={Haller, Emanuela and Burceanu, Elena and Leordeanu, Marius},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2021}
}
```