Training giving strange results #19

Closed

calebhemara opened this issue Sep 2, 2022 · 2 comments

calebhemara commented Sep 2, 2022

Hey Sean, I'm totally inspired by your work.

I've turned DeepLPF into device-agnostic code so it runs on "cpu" (I'm on an M1 Mac and "mps" is still unreliable). I've been able to test images successfully against your existing checkpoints, but I'm getting strange results when training on my own data, and I can't figure out what would be giving this "look". I've followed the training-data image prep as per your README.
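For context, the device selection is roughly this (a minimal sketch of the idea only, not the exact patch; `pick_device` is just an illustrative helper):

```python
import torch

def pick_device(prefer_mps: bool = False) -> torch.device:
    """Fall back to CPU unless CUDA is present or MPS is explicitly requested."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # MPS on Apple Silicon is still flaky for some ops, so keep it opt-in.
    if prefer_mps and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```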
Input: img3
Ground truth: img3
Test output: img3_TEST_1_1 (PSNR 4.726, SSIM 0.202)

Any suggestions?

sjmoran (Owner) commented Sep 2, 2022

Hi, thank you for your interest in DeepLPF. My suggestion is to check how you are loading and pre-processing the images for ingestion into DeepLPF. Make sure the dynamic range is handled appropriately, i.e. if the input images are 8-bit, normalise them by 2^8 - 1 = 255 (and by 2^16 - 1 for 16-bit images). Also check which library you are using to load the images and how it behaves with the image format you are loading; there can be surprises there, depending on the library and format (e.g. TIFF). When you do find the issue and solution, please post back here so others can learn from it. Thank you!
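To illustrate the point, here is a minimal sketch of bit-depth-aware loading and normalisation (this uses OpenCV and is only an illustration, not the actual DeepLPF data loader; `load_image_normalised` is an assumed helper name):

```python
import cv2
import numpy as np
import torch

def load_image_normalised(path: str) -> torch.Tensor:
    """Load an 8-bit or 16-bit colour image and scale it into [0, 1]."""
    # IMREAD_UNCHANGED preserves the stored bit depth (uint8 or uint16).
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(path)
    if img.ndim != 3:
        raise ValueError(f"Expected a 3-channel image, got shape {img.shape}")
    # OpenCV loads colour images as BGR; convert to RGB.
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # Choose the divisor from the stored dtype: 2^8 - 1 or 2^16 - 1.
    if img.dtype == np.uint8:
        max_val = 2 ** 8 - 1
    elif img.dtype == np.uint16:
        max_val = 2 ** 16 - 1
    else:
        raise ValueError(f"Unexpected dtype {img.dtype} for {path}")
    img = img.astype(np.float32) / max_val
    # HWC -> CHW tensor in [0, 1], the layout PyTorch models usually expect.
    return torch.from_numpy(img).permute(2, 0, 1)
```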

calebhemara (Author) commented

Thanks Sean. It turned out to be something pretty obvious, and it wasn't exactly my data pre-processing, though for others: this is the best setup I found for exporting from Lightroom for training. TL;DR: 8-bit TIFF in ProPhoto RGB.

My issue was that I had assumed the training block already contained a torch.load for resuming training from a checkpoint, so my results were just from insufficiently trained parameters. I added it and have much better results now. Thanks again!
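For anyone hitting the same thing, a minimal sketch of resuming from a checkpoint (placeholder names such as `model`, `optimiser` and `checkpoint_path`; this is not the actual DeepLPF training script):

```python
import torch

def resume_from_checkpoint(model, optimiser, checkpoint_path, device="cpu"):
    """Load model (and optionally optimiser) state so training continues from a checkpoint."""
    checkpoint = torch.load(checkpoint_path, map_location=device)
    if isinstance(checkpoint, dict) and "model_state_dict" in checkpoint:
        # Checkpoint saved as a training dict with model/optimiser state and epoch.
        model.load_state_dict(checkpoint["model_state_dict"])
        if optimiser is not None and "optimizer_state_dict" in checkpoint:
            optimiser.load_state_dict(checkpoint["optimizer_state_dict"])
        start_epoch = checkpoint.get("epoch", 0) + 1
    else:
        # Checkpoint saved as a bare state_dict.
        model.load_state_dict(checkpoint)
        start_epoch = 0
    model.to(device)
    return start_epoch
```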
