Source code for the paper "W-Net: Structure and Texture Interaction for Image Inpainting".
Compared with previous methods, our method recovers more coherent structures and symmetrical objects when repairing corrupted regions of an image. (a) The ground truth image, with the mask shown as a blue shadow. (b) The result of GConv. (c) The result of MEDFE. (d) The result of our W-Net.
- Ubuntu 16.04
- Python 3
- NVIDIA GPU with CUDA and cuDNN
- TensorFlow 1.12.0
- Clone this repo:
git clone https://github.com/Evergrow/W-Net.git
cd W-Net
- Set up the environment: install TensorFlow and dependencies.
- Download datasets: We use Places2, CelebA-HQ, and Paris Street-View datasets. Some common inpainting datasets such as CelebA and ImageNet are also available.
- Collect masks: Please refer to this script to process the raw QD-IMD masks into training masks. Liu et al. provide 12k irregular masks as test masks. Note that square masks are not a good choice for training our model, while the test masks can be free-form.
- Modify the GPU id, dataset path, mask path, and checkpoint path in the config file. Adjust other parameters if you like.
- Run
python train.py
and view the training progress with
tensorboard --logdir [path to checkpoints]
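As context for the mask step above: during training, each image is corrupted with a binary mask before being fed to the network. A minimal NumPy sketch of that operation, assuming the convention that mask value 1 marks the hole (check the repo's data loader for the actual convention and mask format):

```python
import numpy as np

def corrupt(image, mask, threshold=127):
    """Binarize a grayscale mask and zero out the hole region of an image.

    image: H x W x C float array in [0, 1]
    mask:  H x W uint8 array; values above `threshold` are treated as holes
    Returns the corrupted image and the binary hole map (1 = hole).
    """
    hole = (np.asarray(mask) > threshold).astype(np.float32)
    corrupted = image * (1.0 - hole[..., None])  # broadcast hole over channels
    return corrupted, hole
```

This is illustration only; the training pipeline in this repo may apply the mask inside its own input queue.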
Choose the input image, mask and model to test:
python test.py --image [input path] --mask [mask path] --output [output path] --checkpoint_dir [model path]
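The network's raw output is commonly blended back into the known pixels of the input so that only the hole region changes. A minimal NumPy sketch of that compositing step (test.py may already do this internally; this is illustration only, with the same 1 = hole convention assumed):

```python
import numpy as np

def composite(original, generated, mask):
    """Keep known pixels from `original`; fill holes (mask == 1) from `generated`.

    original, generated: H x W x C float arrays
    mask: H x W binary float array, 1 = hole
    """
    hole = mask[..., None]  # broadcast over channels
    return original * (1.0 - hole) + generated * hole
```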
Pretrained models are released for quick testing. Download the models via the Google Drive links and move them into your ./checkpoints directory.