
A question about CAM attention map #5

Open
838959823 opened this issue Jul 30, 2023 · 4 comments

Comments

@838959823 commented Jul 30, 2023

[attached image: 1690702183240]
In this picture, why do IL and IH share the same attention map generated by the Mask Extractor when IL and IH are unpaired?

@Ysz2022 (Owner) commented Jul 30, 2023

We conducted extensive experiments on the location (before the Enhance and Degrade modules) and the number (whether the Enhance and Degrade modules share the same attention map or not) of the attention map IA, and found that the design adopted in the paper performs best. We attribute this to two factors.

  1. Our purpose is to enhance better, so we extract IA from the low-light input IL to guide enhancement. Since IA also guides degradation on the right, the Mask Extractor (ME) is trained to exploit more universal degradation features from IL, so as to produce a more realistic Ĩ_L.

  2. Using a specific IA for each of the Enhance and Degrade modules would require more MEs, making training harder to converge. Since NeRCo already contains many components, we have had to develop many loss terms (i.e., the Cooperative Loss (CL)) to constrain them. Adopting more MEs would inevitably increase the difficulty of training. A minimal sketch of the shared-IA design is given below.
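
Here is a minimal PyTorch sketch of the shared attention-map design, assuming simplified conv blocks; `MaskExtractor`, `Enhancer`, and `Degrader` are illustrative stand-ins, not the actual classes in this repo:

```python
# Sketch of the shared-IA design (illustrative, not the repo's actual code).
import torch
import torch.nn as nn

class MaskExtractor(nn.Module):
    """Produces a single-channel attention map IA from the low-light input IL."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # IA in [0, 1]
        )
    def forward(self, IL):
        return self.net(IL)

class Enhancer(nn.Module):
    """Enhances IL, with IA concatenated as guidance."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, IL, IA):
        return self.net(torch.cat([IL, IA], dim=1))

class Degrader(nn.Module):
    """Degrades the normal-light IH toward the low-light domain,
    guided by the SAME IA extracted from IL."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, IH, IA):
        return self.net(torch.cat([IH, IA], dim=1))

ME, enh, deg = MaskExtractor(), Enhancer(), Degrader()
IL = torch.rand(1, 3, 128, 128)  # unpaired low-light input
IH = torch.rand(1, 3, 128, 128)  # unpaired normal-light input
IA = ME(IL)                      # one ME call; IA is extracted from IL only
IH_hat = enh(IL, IA)             # Enhance branch uses IA
IL_tilde = deg(IH, IA)           # Degrade branch reuses the same IA
```

Note that ME is invoked once per low-light image, so a single extractor serves both branches and only one set of ME parameters has to be constrained during training.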

@838959823 (Author)

I mean you can use the same ME. But given different images (unpaired, like IL and IH), the attention maps should be generated by the same ME applied to the different inputs.

@Ysz2022 (Owner) commented Jul 30, 2023

In this paper, ME is trained to depict the low-light distribution of the degraded input. If we instead fed in a clean image and degraded it under the guidance of ME, ME would need to imagine possible dark regions from normal-light images, which also increases training difficulty. At least, our experiments show that this setting performs slightly worse than the setting we finally adopted.
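
For contrast, the alternative raised in the question would look like this, reusing the illustrative modules from the sketch above (hypothetical, not the paper's setting):

```python
# Alternative setting: each branch extracts its own attention map.
IA_L = ME(IL)             # attention from the low-light image
IA_H = ME(IH)             # ME must imagine dark regions in the clean image
IH_hat = enh(IL, IA_L)    # enhancement guided by IA_L
IL_tilde = deg(IH, IA_H)  # degradation guided by IA_H instead of IA_L
```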

@838959823 (Author)

I understand now. Thank you for your time!
