# pix2pix-GANs

We build a pix2pix GAN in PyTorch, trained on the Satellite-Map image dataset (http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/maps.tar.gz).

For a more detailed explanation, you can read this Blog.

## Model Architecture

Since pix2pix is a GAN-based architecture, it has one generator, which generates an image given some input, and one discriminator, which classifies a given image as real or fake. pix2pix is well suited to image-to-image translation, where each training pair consists of an image from one domain and the corresponding image from another. Given an image from domain 1, the generator tries to produce the matching image from domain 2.

The generator's architecture is similar to an autoencoder, whereas the discriminator's architecture is similar to a binary classifier.

More specifically, the generator follows the U-Net architecture, and the discriminator is a patch-wise (PatchGAN) discriminator. The input to the discriminator is the concatenation of the domain-1 image and the (real or generated) domain-2 image. Both the generator and the discriminator are defined in Models.py; a sketch of the two architectures follows.
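The sketch below illustrates the two ideas described above: a U-Net-style down/up stage and a PatchGAN discriminator that scores the concatenated image pair patch by patch. It is a minimal PyTorch sketch for the 256x256 setting used here; the class names and layer choices are illustrative assumptions, and the authoritative definitions live in Models.py.

```python
# Illustrative sketch only -- see Models.py for the repository's actual code.
import torch
import torch.nn as nn

class Block(nn.Module):
    """One U-Net stage: strided conv on the way down, transposed conv up."""
    def __init__(self, in_ch, out_ch, down=True, use_dropout=False):
        super().__init__()
        conv = (nn.Conv2d(in_ch, out_ch, 4, 2, 1, bias=False, padding_mode="reflect")
                if down else
                nn.ConvTranspose2d(in_ch, out_ch, 4, 2, 1, bias=False))
        layers = [conv, nn.BatchNorm2d(out_ch),
                  nn.LeakyReLU(0.2) if down else nn.ReLU()]
        if use_dropout:
            layers.append(nn.Dropout(0.5))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN: input is the domain-1 image concatenated with a
    (real or generated) domain-2 image along the channel axis."""
    def __init__(self, in_ch=3, features=(64, 128, 256, 512)):
        super().__init__()
        layers = [nn.Conv2d(in_ch * 2, features[0], 4, 2, 1, padding_mode="reflect"),
                  nn.LeakyReLU(0.2)]
        c = features[0]
        for f in features[1:]:
            layers += [nn.Conv2d(c, f, 4, 2 if f != features[-1] else 1, 1,
                                 bias=False, padding_mode="reflect"),
                       nn.BatchNorm2d(f), nn.LeakyReLU(0.2)]
            c = f
        layers.append(nn.Conv2d(c, 1, 4, 1, 1, padding_mode="reflect"))
        self.net = nn.Sequential(*layers)

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))  # one logit per patch

if __name__ == "__main__":
    sat = torch.randn(1, 3, 256, 256)
    gen_map = torch.randn(1, 3, 256, 256)
    print(Block(3, 64)(sat).shape)                    # torch.Size([1, 64, 128, 128])
    print(PatchDiscriminator()(sat, gen_map).shape)   # torch.Size([1, 1, 30, 30])
```

Because the output is a grid of per-patch logits rather than a single scalar, the discriminator judges local realism, which is what makes the PatchGAN effective for translation tasks.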

## Dataset Preparation

Since we are building a satellite-image-to-map generator, the available dataset stores each satellite image and its corresponding map side by side in a single image of shape (1200, 600, 3). We therefore first split each combined image so that the dataloader yields (satellite_image, map_image) pairs. We also apply basic augmentation to the inputs to make the generator more robust. Dataset preparation is done in dataset.py; a sketch of the split is shown below.
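A minimal sketch of that split, assuming the satellite photo occupies the left half of each combined file and the map the right half. The class name is illustrative; dataset.py is the authoritative version and also handles resizing, normalization, and augmentation.

```python
# Illustrative sketch only -- see dataset.py for the repository's actual code.
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class MapDataset(Dataset):
    def __init__(self, root_dir):
        self.root_dir = root_dir
        self.files = sorted(os.listdir(root_dir))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Each file is 1200x600 pixels (width x height), so as a NumPy
        # array it has shape (600, 1200, 3).
        path = os.path.join(self.root_dir, self.files[idx])
        img = np.array(Image.open(path))
        satellite_image = img[:, :600, :]  # left half
        map_image = img[:, 600:, :]        # right half
        # (resize to 256x256, normalize, and augment here, as in dataset.py)
        return satellite_image, map_image
```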

## Hyperparameters

| Hyperparameter    | Value |
| ----------------- | ----- |
| Learning rate     | 2e-4  |
| beta1             | 0.5   |
| Batch size        | 16    |
| Number of workers | 2     |
| Image size        | 256   |
| L1_Lambda         | 100   |
| Lambda_GP         | 10    |
| Epochs            | 800   |

These hyperparameters are configured in config.py, roughly as follows.
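A hedged sketch of what config.py holds, using the values from the table above; the exact identifiers in the file may differ.

```python
# Illustrative sketch only -- see config.py for the repository's actual names.
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
LEARNING_RATE = 2e-4
BETA1 = 0.5
BATCH_SIZE = 16
NUM_WORKERS = 2
IMAGE_SIZE = 256
L1_LAMBDA = 100
LAMBDA_GP = 10
NUM_EPOCHS = 800
LOAD_MODEL = False
SAVE_MODEL = True
```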

## Training Results

### After 1 Epoch

*Output after epoch 1: satellite image (left), map (middle), generated map (right)*

### After 100 Epochs

*Output after epoch 100: satellite image (left), map (middle), generated map (right)*

### After 400 Epochs

*Output after epoch 400: satellite image (left), map (middle), generated map (right)*

### After 800 Epochs

*Output after epoch 800: satellite image (left), map (middle), generated map (right)*

## Generator Loss vs. Discriminator Loss

*Plot: generator loss vs. discriminator loss over training*
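For context on the two curves, the standard pix2pix objective pairs a BCE-with-logits adversarial loss for both networks with an L1 reconstruction term on the generator, weighted by L1_Lambda = 100. The sketch below shows one training step in that form; it is an illustration of the objective, not train.ipynb verbatim, and it omits any gradient-penalty term that Lambda_GP might weight.

```python
# Illustrative sketch of one pix2pix training step, not train.ipynb verbatim.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
L1_LAMBDA = 100

def train_step(gen, disc, opt_gen, opt_disc, x, y):
    """x: domain-1 (satellite) batch, y: domain-2 (map) batch."""
    # Discriminator: push real pairs toward 1 and fake pairs toward 0.
    y_fake = gen(x)
    d_real = disc(x, y)
    d_fake = disc(x, y_fake.detach())
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake))) / 2
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Generator: fool the discriminator + stay close to the target in L1.
    d_fake = disc(x, y_fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + L1_LAMBDA * l1(y_fake, y)
    opt_gen.zero_grad()
    g_loss.backward()
    opt_gen.step()
    return d_loss.item(), g_loss.item()
```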

## Training

```bash
git clone [email protected]:shashi7679/pix2pix-GANs.git
cd pix2pix-GANs
bash download.sh
```

Then run train.ipynb in Jupyter Notebook.

- For training, set LOAD_MODEL to False and SAVE_MODEL to True in config.py.
- For validation, or to reuse a saved model, set LOAD_MODEL to True in config.py (see the checkpoint sketch after this list).
- To download the pretrained models for validation, Click Here.
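As a rough sketch of what SAVE_MODEL and LOAD_MODEL toggle, the usual pix2pix-in-PyTorch checkpoint pattern looks like the following; the helper names here are assumptions, not necessarily this repository's exact code.

```python
# Illustrative checkpoint helpers -- names and layout are assumptions.
import torch

def save_checkpoint(model, optimizer, filename):
    torch.save({"state_dict": model.state_dict(),
                "optimizer": optimizer.state_dict()}, filename)

def load_checkpoint(filename, model, optimizer, lr, device="cuda"):
    ckpt = torch.load(filename, map_location=device)
    model.load_state_dict(ckpt["state_dict"])
    optimizer.load_state_dict(ckpt["optimizer"])
    for group in optimizer.param_groups:  # restore the configured LR
        group["lr"] = lr
```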

## References