
dual branch and output score normalization #39

Open
KarenAssaraf opened this issue May 29, 2023 · 3 comments

Comments

@KarenAssaraf

Hey!
Thanks for being so responsive.
I have yet another question...

In the dual branch, the fc_score branch is a fully connected layer that ends with a ReLU, while the fc_weight branch ends with a sigmoid. Am I correct that this makes the score q range between 0 and +infinity?

Is the network supposed to output normalized scores?

@TianheWu
Collaborator

Hi! That is a good question! Different perceptual scales use different quality score ranges. To make training easier, we normalize the scores of every dataset to the range 0-1, so the output should lie between 0 and 1. If you want to recover the original score, you can write the inverse of the normalization function.
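The thread does not spell out which normalization is used; assuming a simple min-max mapping over each dataset's MOS range (the range values below are illustrative, not taken from the repo), the forward and inverse functions could look like:

```python
import numpy as np

def normalize(scores, lo, hi):
    """Map raw MOS values from [lo, hi] to [0, 1]."""
    scores = np.asarray(scores, dtype=np.float64)
    return (scores - lo) / (hi - lo)

def denormalize(scores, lo, hi):
    """Inverse mapping: recover raw MOS values from normalized [0, 1] scores."""
    scores = np.asarray(scores, dtype=np.float64)
    return scores * (hi - lo) + lo

# Illustrative raw MOS range of 1-5 (check the actual dataset's range)
raw = np.array([1.0, 3.0, 5.0])
norm = normalize(raw, 1.0, 5.0)        # -> [0.0, 0.5, 1.0]
back = denormalize(norm, 1.0, 5.0)     # recovers [1.0, 3.0, 5.0]
```

Applying `denormalize` to the network's output undoes the training-time normalization, as the reply suggests.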

@KarenAssaraf
Author

Thanks a lot!
I am trying to use your network as a backbone, starting from the KonIQ-pretrained weights.
I am doing pairwise fine-tuning (a siamese setup), where my data consists of paired images, trained with a margin ranking loss.
The thing is, I see the network outputting values between -5 and 5... so I was wondering how that could be.
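One thing worth noting about this setup: a margin ranking loss only constrains the *difference* between the two scores of a pair, not their absolute values, so fine-tuning with it can push outputs outside the original [0, 1] range. A minimal numpy sketch of the loss (mirroring the semantics of `torch.nn.MarginRankingLoss`; the margin value here is illustrative):

```python
import numpy as np

def margin_ranking_loss(s1, s2, y, margin=0.1):
    """Pairwise ranking loss: y = +1 means s1 should exceed s2 by at least `margin`.
    Same semantics as torch.nn.MarginRankingLoss: max(0, -y * (s1 - s2) + margin)."""
    s1, s2, y = (np.asarray(a, dtype=np.float64) for a in (s1, s2, y))
    return np.maximum(0.0, -y * (s1 - s2) + margin).mean()

# Predicted quality scores for two image pairs
s_better = np.array([0.9, 0.4])   # scores of the preferred image in each pair
s_worse  = np.array([0.2, 0.6])   # scores of the other image
y = np.array([1.0, 1.0])          # first image of each pair should rank higher

loss = margin_ranking_loss(s_better, s_worse, y, margin=0.1)
# Pair 1 already satisfies the margin (loss 0); pair 2 is mis-ranked (loss 0.3),
# so the mean loss is 0.15.
```

Since nothing in this objective anchors the score scale, observing raw outputs in roughly [-5, 5] after such fine-tuning is consistent with the loss being used.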

@KarenAssaraf
Author

Hi, actually, looking at the code of the dual branch: even though the label scores are normalized, the output can still be greater than 1.
I think this is due to the fc_score branch, which ends with a ReLU, meaning fc_score values can exceed 1.
If fc_score is greater than 1, then the dual-branch output s = torch.sum(f * w) / torch.sum(w) can also be greater than 1.
Am I correct?
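The reasoning above can be checked numerically: the pooled score s = sum(f * w) / sum(w) is a weighted average of the per-patch scores f, so it exceeds 1 exactly when the weight mass sits on patches with f > 1, which ReLU permits. A small sketch with hypothetical branch pre-activations (the values are made up for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical pre-activation outputs of the two branches for three patches
score_logits  = np.array([1.5, 2.0, 0.5])   # -> ReLU: f is unbounded above
weight_logits = np.array([0.3, 1.0, -0.5])  # -> sigmoid: w stays in (0, 1)

f = relu(score_logits)       # per-patch scores, in [0, +inf)
w = sigmoid(weight_logits)   # per-patch weights, in (0, 1)

# Dual-branch pooling from the issue: s = sum(f * w) / sum(w)
s = np.sum(f * w) / np.sum(w)
# Here s is about 1.49: greater than 1, but never above max(f),
# since a weighted average cannot exceed its largest element.
```

So the observation in the comment holds: normalizing the labels to [0, 1] bounds the training targets, but nothing in the architecture itself clamps the prediction to that interval.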
