KonIQ pretrained model hyperparameters #35

Mishra1995 opened this issue May 11, 2023 · 5 comments
@Mishra1995

Hello authors,

Thanks for open-sourcing this repository!
I had one query regarding the pre-trained model shared for the KonIQ dataset. In the paper you mentioned the following:

[Attached screenshot of the relevant paragraph from the paper: Snip20230511_1]

I understood that, following previous IQA works, you split the dataset into an 8:2 ratio five times using five different seeds. At test time, you took 20 random 224x224 crops of each image and reported the average results.

But can you explain the following two points:

  1. What do you mean by "the final score is generated by predicting the mean score of these 20 images and all results are averaged by 10 times split"? As far as I understood, only 5 splits were created, right?

  2. The checkpoint you have provided for KonIQ gives the best results on the validation split created by one of the seed values, right? (Please correct me if my understanding is wrong.) If so, can you please share the hyperparameters of that model? Or are the reported metrics from some ensemble model?

Kindly clarify,

Thanks!

@TianheWu
Collaborator

  1. This is an error in our paper, thanks for catching it. Each image is tested by averaging the results of 20 random crops.
  2. During the paper writing period, we didn't test our model on the KonIQ dataset; I tested it not long ago. I just split the dataset one time, with seed 2 or 20 (sorry, I forget which). But during the other experiments, I found that MANIQA has stable performance on the KonIQ dataset. (Remember to resize images to 224x224, not crop, during the KonIQ training stage.)
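For readers following along, here is a minimal sketch of the test-time protocol described in this comment, assuming a PyTorch model that maps a 224x224 image tensor to a scalar quality score; the function and variable names are illustrative, not the repository's actual API:

```python
import torch
import torchvision.transforms as T

# Illustrative sketch only: `model` stands for a trained MANIQA-style network
# that maps a (1, 3, 224, 224) tensor to a scalar quality score.
def predict_score(model, image, num_crops=20, crop_size=224):
    """Average the predictions over `num_crops` random crops of one image."""
    crop = T.RandomCrop(crop_size)
    model.eval()
    scores = []
    with torch.no_grad():
        for _ in range(num_crops):
            patch = crop(image)  # image: (3, H, W) tensor with H, W >= 224
            scores.append(model(patch.unsqueeze(0)).item())
    return sum(scores) / len(scores)

# For the KonIQ training stage, the comment above suggests resizing instead of cropping:
train_transform = T.Compose([T.Resize((224, 224))])
```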

@Mishra1995
Author

  1. Thanks for clarifying that.
  2. Sure, no issues with that. Can you then tell me, in general, how to pick the best model for evaluation if, say, the same model was trained on dataset splits (8:2) created by different seeds? We would have obtained as many model instances as splits created.

@Mishra1995
Author

Hi @TianheWu ,

It would be really helpful if you could please share your insights on the above query.

@TianheWu
Collaborator

Hi, I just saw this.
Sorry, I can't quite get your meaning. The split (8:2) is random.

@Mishra1995
Author

Thanks for the reply, I understand that! My query is this: suppose you take KonIQ and divide it using 5 random seeds. For final deployment, which model would you select? Would you evaluate all of the best models obtained from the 5 splits on a separate held-out set and compare their performance there?
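For reference, a minimal sketch of the multi-seed setup being asked about; `train_and_eval` is a hypothetical placeholder for training on one split and returning a validation metric, not part of this repository:

```python
import numpy as np

def run_multi_seed_splits(train_and_eval, num_images, seeds=(0, 1, 2, 3, 4), train_ratio=0.8):
    """Create one random 8:2 split per seed, train/evaluate on each, and return per-seed metrics."""
    per_seed_metrics = []
    for seed in seeds:
        rng = np.random.default_rng(seed)
        idx = rng.permutation(num_images)        # shuffle image indices with this seed
        cut = int(train_ratio * num_images)
        train_idx, val_idx = idx[:cut], idx[cut:]
        per_seed_metrics.append(train_and_eval(train_idx, val_idx, seed))
    # IQA papers commonly report the mean (or median) metric over these splits
    # rather than selecting the single best split's checkpoint.
    return per_seed_metrics, float(np.mean(per_seed_metrics))
```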
