Hi, I'm having some issues creating a template-to-image output. I built a paired dataset that matches each created template to its original image, and the cropped checkpoints during training showed decent fake outputs. However, when I run my test set (a collection of templates only) as single images, the generated output is somewhat blurry.
Here is my current train command:
python train.py --dataroot ./datasets/mask_good/ --name mask_good --model pix2pix --direction AtoB --preprocess crop --load_size 2048 --crop_size 1024 --n_epochs 1000 --n_epochs_decay 1000 --save_epoch_freq 50
Here is the test command:
python test.py --dataroot ./datasets/mask_good/ --name mask_good --model test --load_size 2048 --crop_size 2048 --direction AtoB --dataset_mode single --netG unet_256 --norm batch
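For context, training used --crop_size 1024 while the test command feeds the full 2048×2048 template at once. A hypothetical workaround (not something the repo provides) would be to tile the test template into 1024×1024 crops that match the training crop size, run each crop through the generator, and stitch the outputs. A minimal sketch of just the tiling geometry:

```python
def tile_boxes(width, height, tile):
    """Return (left, top, right, bottom) boxes covering the image with
    tile-sized crops. The last row/column is shifted back so no tile
    runs past the image border (tiles may overlap there)."""
    xs = sorted({min(x, width - tile) for x in range(0, width, tile)})
    ys = sorted({min(y, height - tile) for y in range(0, height, tile)})
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

# For a 2048x2048 template with the 1024 training crop size,
# this yields four non-overlapping quadrants.
boxes = tile_boxes(2048, 2048, 1024)
print(boxes)
```

Each box could then be passed to `PIL.Image.crop` before inference and the results pasted back at the same offsets; `tile_boxes` itself is only an illustration of the crop layout.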
The generated results were decent when I was testing with 512×512 images. The current image is 2048×2048 (two 1024×2048 images side by side).
An additional issue I noticed is that the random training crop rarely includes the edges of the image, which I suspect is why the "frame" of the image is not generated correctly.
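To put a number on "rarely": assuming the crop offset is drawn uniformly from 0..(load_size - crop_size) on each axis (as in a standard uniform random crop), the chance that a crop touches any border can be computed exactly:

```python
# Exact probability that a uniform random crop touches at least one
# image edge, assuming the offset on each axis is drawn uniformly
# from the integers 0..(load_size - crop_size).
load_size, crop_size = 2048, 1024
n = load_size - crop_size + 1        # 1025 possible offsets per axis
interior = n - 2                     # offsets touching neither edge on one axis
p_edge = 1 - (interior / n) ** 2     # touches an edge on at least one axis
print(round(p_edge, 4))              # ~0.4% of crops include an edge
```

So with these sizes, only about 0.4% of training crops ever see the image border, which is consistent with the frame being poorly learned.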