chainer implementation of pix2pix https://phillipi.github.io/pix2pix/
The Japanese readme can be found here.
From the left side: input, output, ground_truth

- `pip install -r requirements.txt`
- Download the facade dataset (base set) http://cmp.felk.cvut.cz/~tylecr1/facade/
- `python train_facade.py -g [GPU ID, e.g. 0] -i [dataset root directory] --out [output directory] --snapshot_interval 10000`
- Wait a few hours...
- `--out` stores snapshots of the model and example images at an interval defined by `--snapshot_interval`.
- If the model is large, you can save snapshots less often (increase `--snapshot_interval`) to save disk space; a sketch of how this wiring typically looks is shown below.
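For orientation, here is a minimal sketch of how such snapshot extensions are typically attached to a Chainer trainer. The function and variable names (`add_snapshots`, `trainer`, `gen`) are illustrative assumptions, not taken from `train_facade.py`.

```python
from chainer.training import extensions


def add_snapshots(trainer, gen, interval):
    """Attach periodic snapshot extensions to an existing chainer Trainer.

    `trainer` and `gen` are assumed to be the Trainer and generator built
    elsewhere in the training script; only the extension wiring is shown here.
    """
    trigger = (interval, 'iteration')
    # Full trainer state, useful for resuming an interrupted run.
    trainer.extend(extensions.snapshot(), trigger=trigger)
    # Generator weights only; much smaller than a full trainer snapshot.
    trainer.extend(
        extensions.snapshot_object(gen, 'gen_iter_{.updater.iteration}.npz'),
        trigger=trigger)
```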
- Gather image pairs (e.g. label + photo). Several hundred pairs are required for good results.
- Create a copy of `facade_dataset.py` for your dataset. The function `get_example` should be written so that it returns the i-th image pair as a tuple of numpy arrays, i.e. `(input, output)`; see the dataset sketch after this list.
- It may be necessary to update the loss function in `updater.py`; see the loss sketch after this list.
- Likewise, make a copy of `facade_visualizer.py` and modify it to visualize your dataset.
- In `train_facade.py`, change `in_ch` and `out_ch` to the correct numbers of input and output channels for your data.
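For reference, a minimal sketch of what such a dataset copy could look like. The class name, directory layout (`input/` and `output/` subfolders of PNGs) and normalization are assumptions for illustration; only the `get_example` contract of returning an `(input, output)` tuple of numpy arrays comes from the steps above.

```python
import glob
import os

import numpy as np
from PIL import Image
from chainer.dataset import dataset_mixin


class PairedImageDataset(dataset_mixin.DatasetMixin):
    """Illustrative paired dataset: yields (input, output) numpy-array pairs.

    Assumes `<root>/input/*.png` and `<root>/output/*.png` hold aligned image
    pairs; adapt the loading and normalization to your own data layout.
    """

    def __init__(self, root):
        self.input_paths = sorted(glob.glob(os.path.join(root, 'input', '*.png')))
        self.output_paths = sorted(glob.glob(os.path.join(root, 'output', '*.png')))
        assert len(self.input_paths) == len(self.output_paths)

    def __len__(self):
        return len(self.input_paths)

    def _load(self, path):
        # HWC uint8 -> CHW float32 scaled to [-1, 1]
        img = np.asarray(Image.open(path).convert('RGB'), dtype=np.float32)
        return img.transpose(2, 0, 1) / 127.5 - 1.0

    def get_example(self, i):
        # The trainer expects the i-th pair as a tuple of numpy arrays.
        return self._load(self.input_paths[i]), self._load(self.output_paths[i])
```

With three-channel RGB images on both sides as in this sketch, `in_ch` and `out_ch` in `train_facade.py` would both be 3.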
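And a rough sketch of the kind of loss a pix2pix-style updater computes: an adversarial term plus a weighted L1 term for the generator, and real/fake classification for the discriminator. Variable names and the weighting factor `lam` are illustrative and may not match what `updater.py` actually does.

```python
import chainer.functions as F


def gen_loss(y_fake, x_fake, t_real, lam=100.0):
    """Generator loss: fool the discriminator + stay close to ground truth (L1)."""
    # softplus(-y) == -log(sigmoid(y)): push discriminator output on fakes toward "real".
    adv = F.average(F.softplus(-y_fake))
    return adv + lam * F.mean_absolute_error(x_fake, t_real)


def dis_loss(y_real, y_fake):
    """Discriminator loss: real patches toward 1, generated patches toward 0."""
    return F.average(F.softplus(-y_real)) + F.average(F.softplus(y_fake))
```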