PyTorch Implementation: Optimizing the Latent Space of Generative Networks

My PyTorch implementation of the paper “Optimizing the Latent Space of Generative Networks” by Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. The paper is a very interesting read and well worth understanding!

My personal goal with this project was to practice reimplementing a paper in order to gain more experience. The paper is not completely trivial to implement, and at the same time it is quite non-standard, which made it a good exercise.
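As a rough illustration of the core idea (not the exact code in this repository): GLO learns one latent vector per training image and optimizes these latents jointly with the generator, re-projecting them onto the unit sphere after each update. The minimal sketch below uses a placeholder generator and a plain L1 loss instead of the Laplacian pyramid loss (“laploss”) used in the paper; all names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

n_images, latent_dim = 1000, 100

# Placeholder generator; the paper uses a DCGAN-style convolutional generator.
generator = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh())

# One learnable latent code per training image.
latents = nn.Parameter(torch.randn(n_images, latent_dim))

# Generator parameters and latent codes are optimized jointly.
optimizer = torch.optim.Adam(list(generator.parameters()) + [latents], lr=1e-3)

def train_step(indices, images):
    """One GLO step on a mini-batch; images has shape (batch, 3, 64, 64)."""
    optimizer.zero_grad()
    reconstruction = generator(latents[indices]).view(images.shape)
    # Plain L1 for brevity; the paper uses a Laplacian pyramid L1 loss.
    loss = F.l1_loss(reconstruction, images)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        # Project the latent codes back onto the unit sphere after the update.
        latents[indices] = F.normalize(latents[indices], dim=1)
    return loss.item()

The PCA items in the to-do list below presumably refer to initializing these latent codes from a PCA of the training images, as suggested in the paper.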

Setup and Installation

Install the dependencies, ideally in a virtual environment:

pip install -r requirements.txt

Install PyTorch as described on the PyTorch website, choosing the build that matches your Python version, CUDA setup, etc.
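For example, a CPU-only installation can usually be done with the command below; check the PyTorch website for the exact command matching your CUDA version.

pip install torch torchvision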

Train the model

python main.py

You can also list all available options, e.g. to choose the dataset, the path where the dataset is stored, or the training parameters, with:

python main.py -h

TensorboardX
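Training progress (losses, model parameter histograms, and a fixed set of evaluation images, see the to-do list below) is logged via TensorboardX. A minimal sketch of how such logging typically looks; the log directory and tag names here are placeholders, not necessarily the ones used in this repository.

from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir="runs/glo")       # hypothetical log directory
for step, loss in enumerate([0.9, 0.7, 0.5]):    # dummy loss values
    writer.add_scalar("train/loss", loss, step)
writer.close()

The resulting logs can then be inspected with tensorboard --logdir runs.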

To-Do

  • [ ] explanation for the tensorboard usage
  • [X] logging of the model parameters for nice histograms
  • [X] for visual testing: evaluate always the same images to send to tensorboard
  • [X] rename model
  • [ ] Describe the `plac` parameters well
  • [X] Clean up the laploss and PCA parts
  • [X] Store the PCA part locally to save some time
  • [X] Use ignite

Related

Another implementation I found on the topic: https://github.com/tneumann/minimal_glo