
CShift

This repository contains the PyTorch implementation of the CShift model introduced in our BMVC 2021 paper, "Self-Supervised Learning in Multi-Task Graphs through Iterative Consensus Shift".

Overview

We provide code for reproducing the main results of our paper on two datasets (Replica and Hypersim). Our self-supervised approach exploits the consensus in a multi-task graph to adapt to newly seen data distributions. As a starting point, we use off-the-shelf expert models, trained out of distribution, for various visual tasks (we provide code for 13 such experts). The graph then self-adapts to the target distribution, surpassing the performance of the initial experts.
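The consensus idea can be pictured with a minimal numpy sketch. Note that the per-pixel median below is only a simple stand-in for the selection-based ensemble described in the paper, and the shapes and number of predictions are illustrative:

```python
import numpy as np

# Several graph edges predict the same target map (e.g. depth) for one image.
# A per-pixel consensus over those predictions becomes the pseudo-label used
# to retrain each edge on the new distribution. The median is a stand-in for
# the paper's selection-based ensemble.
rng = np.random.default_rng(0)
edge_predictions = rng.random((5, 64, 64))  # 5 edges, one 64x64 map each

pseudo_label = np.median(edge_predictions, axis=0)
print(pseudo_label.shape)  # (64, 64)
```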

Installation

Create a conda environment with the following configuration (it includes dependencies for all the experts):

  • conda env create -f environment_chift.yml

CShift steps

Step 0. Download expert models

cd experts/models; bash get_models.sh

Step 1. Preprocess

We provide code for preprocessing 13 experts, on 2 datasets: Replica and Hypersim.

Replica

Generate and preprocess the Replica dataset:

  • download the metadata from the Replica repo (see preprocess_dbs/replica_generator/generator.py in our repo)
  • cd preprocess_dbs/replica_generator; python generator.py
  • cd preprocess_dbs; bash preprocess_replica.sh
  • note: you need to update the paths in both preprocess_dbs/replica_generator/generator.py and preprocess_dbs/main_replica.py

Hypersim

Generate and preprocess the Hypersim dataset:

  • download the dataset from the Hypersim repo (see preprocess_dbs/hypersim/dataset_download_images.py in our repo)
  • our dataset splits are available in preprocess_dbs/hypersim/cshift_hypersim_splits.csv
  • cd preprocess_dbs; bash preprocess_hypersim.sh
  • note: you need to update paths in preprocess_dbs/main_hypersim.py
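As a sketch of how a splits file like this might be consumed: the column names ("scene", "split") and the sample rows below are assumptions, so check the header of cshift_hypersim_splits.csv for the real schema:

```python
import csv
import io

# Stand-in rows mimicking cshift_hypersim_splits.csv; the real file defines
# which Hypersim scenes belong to which split.
sample_csv = io.StringIO(
    "scene,split\n"
    "ai_001_001,train\n"
    "ai_001_002,train\n"
    "ai_002_001,test\n"
)

# Group scene names by split so a loader can iterate one split at a time.
scenes_by_split = {}
for row in csv.DictReader(sample_csv):
    scenes_by_split.setdefault(row["split"], []).append(row["scene"])
print(scenes_by_split)
```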

Step 2. Train

Once the datasets are generated and preprocessed, we train each edge on the current iteration's pseudo-labels. Each line in the following script trains all of the graph's edges that reach a certain node (e.g., depth_xtc).

  • bash script_train_all.sh
  • note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.
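The shape of that edit can be illustrated with configparser. Only the section names [PathsIter] and [Edge Models] and the load_path key come from the note above; the iter1_dir key and all path values are placeholders, so open the shipped configuration file to see the real keys:

```python
import configparser
import io

# Build a fragment with the two entries the note says to point at your own
# paths. "iter1_dir" is a hypothetical key under [PathsIter].
cfg = configparser.ConfigParser()
cfg["PathsIter"] = {"iter1_dir": "/your/path/to/iter1_pseudo_labels"}
cfg["Edge Models"] = {"load_path": "/your/path/to/edge_checkpoints"}

# Render the fragment to see the INI text that would land in the config file.
buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())
```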

Step 3. Eval [optional]

The following script evaluates each edge and the CShift selection-based ensemble.

  • bash script_eval_all.sh
  • note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.

Step 4. Store

To store the current CShift predictions (so they can be used as pseudo-labels in the next iteration):

  • bash script_store_all.sh
  • note: you need to update [PathsIter] and the [Edge Models] load_path in the configuration file with your own paths.

Step 5. Go to Step 2. Train

To add a new iteration, repeat the train and store steps, using as pseudo-labels the outputs saved in Step 4 (Store) of the previous iteration.
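The whole loop can be sketched as a dry run that only prints the commands; NUM_ITERS is your choice, and the subprocess call stays commented out until the configuration paths are bumped between iterations:

```python
import subprocess  # used once you uncomment the run call below

# One CShift iteration = retrain every edge on the previous iteration's
# stored pseudo-labels (Step 2), then store the new ensemble outputs as the
# next iteration's pseudo-labels (Step 4). Remember to update the iteration
# paths in the configuration file between runs.
NUM_ITERS = 2  # assumption: pick how many consensus-shift iterations you want
commands = []
for it in range(1, NUM_ITERS + 1):
    for script in ("script_train_all.sh", "script_store_all.sh"):
        commands.append(f"bash {script}")
        print(f"iteration {it}: bash {script}")
        # subprocess.run(["bash", script], check=True)  # uncomment to run
```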

Expert Models

| Task | Expert model | Training dataset | Path in repo |
| --- | --- | --- | --- |
| RGB | - | - | RGB loading code |
| Halftone | halftone-python | - | Halftone loading code |
| Grayscale | - | - | Gray loading code |
| HSV | rgb2hsv | - | HSV loading code |
| Depth | XTC | Taskonomy | Depth loading code |
| Surface normals | XTC | Taskonomy | Normals loading code |
| Small low-level edges | Sobel (sigma 0.1) | - | Small Edges loading code |
| Medium low-level edges | Sobel (sigma 1) | - | Medium Edges loading code |
| Large low-level edges | Sobel (sigma 4) | - | Large Edges loading code |
| High-level edges | DexiNed | BIPED | High-level edges loading code |
| Superpixel | SpixelNet | SceneFlow, BSDS500 | Superpixel loading code |
| Cartoon | Cartoonize | FFHQ | Cartoon loading code |
| Semantic segmentation | HRNetv2 | ADE20k | Semantic segmentation loading code |
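Several rows of the table are trained networks, but experts with "-" in both columns are parameter-free transforms. A minimal sketch of the grayscale one, using standard ITU-R BT.601 luminance weights (the function name is illustrative, not the repo's API):

```python
import numpy as np

def grayscale_expert(rgb: np.ndarray) -> np.ndarray:
    """HxWx3 RGB image in [0, 1] -> HxW luminance map (ITU-R BT.601 weights).

    A parameter-free "expert": it needs no model weights or training data,
    matching the "-" entries in the table above.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

img = np.ones((4, 4, 3)) * [1.0, 0.0, 0.0]  # pure red test image
print(grayscale_expert(img)[0, 0])  # 0.299
```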

Citation

If you find the code, models, or data useful, please cite this paper:

@article{haller2021unsupervised,
  title={Self-Supervised Learning in Multi-Task Graphs through Iterative Consensus Shift},
  author={Haller, Emanuela and Burceanu, Elena and Leordeanu, Marius},
  journal={BMVC},
  year={2021}
}
