Painters can successfully restore severely damaged objects, yet current inpainting algorithms cannot yet match this ability. Before restoring an image with large missing regions, a painter typically forms a conjecture about its content, which can be expressed as a text description. This paper imitates the painter's conjecture process and, for the first time, introduces text descriptions into the image inpainting task, providing rich guidance for restoration through the fusion of multimodal features. We propose a multimodal fusion learning method for image inpainting (MMFL). To make better use of text features, we construct an image-adaptive word demand module that reasonably filters the effective text features. We introduce a text-guided attention loss and a text-image matching loss to make the network pay more attention to the entities mentioned in the text description. Extensive experiments show that our method better predicts the semantics of objects in the missing regions and generates fine-grained textures.
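The image-adaptive word demand module described above might be sketched, very loosely, as a gating network that re-weights each word feature according to the image feature. All class names, dimensions, and design details below are illustrative assumptions, not the paper's actual code:

```python
import torch
import torch.nn as nn

class WordDemandGate(nn.Module):
    """Hypothetical sketch: predict a per-word gate in [0, 1] from the
    concatenation of each word feature and the pooled image feature,
    so that only words the image 'demands' pass through."""
    def __init__(self, word_dim, img_dim):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(word_dim + img_dim, word_dim),
            nn.Sigmoid(),  # element-wise gate in [0, 1]
        )

    def forward(self, words, img):
        # words: (B, T, word_dim) word features, img: (B, img_dim) image feature
        img_exp = img.unsqueeze(1).expand(-1, words.size(1), -1)
        g = self.gate(torch.cat([words, img_exp], dim=-1))
        return g * words  # filtered word features, same shape as `words`

# usage
gate = WordDemandGate(word_dim=256, img_dim=512)
words = torch.randn(2, 18, 256)   # batch of 2 captions, 18 words each
img = torch.randn(2, 512)
out = gate(words, img)
print(out.shape)
```

Because the gate is a sigmoid, the filtered features are always bounded in magnitude by the original word features; the real module may use attention or a different fusion scheme.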
- We use the CUB-200-2011, Flowers, and CelebA datasets. Download these datasets and save them to `data/`.
- Follow AttnGAN to pre-train the DAMSM model and save it to `DAMSMencoders/`.
- Divide the dataset into train/test sets.
- To train, run `python train.py`.
- To test, run `python test.py --checkpoint path_to_checkpoint --save_dir path_to_save_dir`.
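The steps above can be summarized as the following workflow sketch. The directory layout beyond `data/` and `DAMSMencoders/` is an assumption, and the script flags should be checked against the repository itself:

```shell
# Assumed end-to-end workflow (illustrative; verify against the repo).
mkdir -p data DAMSMencoders
# 1. Place CUB-200-2011 / Flowers / CelebA under data/
# 2. Place the pre-trained DAMSM encoders under DAMSMencoders/
# 3. python train.py
# 4. python test.py --checkpoint path_to_checkpoint --save_dir path_to_save_dir
ls -d data DAMSMencoders
```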
@InProceedings{Lin_2020_MMFL,
  author    = {Qing Lin and Bo Yan and Jichun Li and Weimin Tan},
  title     = {MMFL: Multimodal Fusion Learning for Text-Guided Image Inpainting},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia (MM ’20)},
  month     = {October},
  year      = {2020}
}
Our implementation benefits greatly from CSA, CSA_pytorch, and AttnGAN.