
Training on SQuAD2 gives worse results on the evaluation set (research paper shows better results) #9

Closed
bhadreshpsavani opened this issue Jan 21, 2021 · 1 comment

bhadreshpsavani commented Jan 21, 2021

I tried training MPNet on SQuAD2 data; below are the results I got on the evaluation set.

I used this script

```
exact            = 50.07159100480081
f1               = 50.07159100480081
total            = 11873
HasAns_exact     = 0.0
HasAns_f1        = 0.0
HasAns_total     = 5928
NoAns_exact      = 100.0
NoAns_f1         = 100.0
NoAns_total      = 5945
best_exact        = 50.07159100480081
best_exact_thresh = 0.0
best_f1           = 50.07159100480081
best_f1_thresh    = 0.0
```
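For what it's worth, these metrics are exactly what a model that predicts the empty answer for every question would score on the SQuAD 2.0 dev set: every HasAns question gets 0, every NoAns question gets 100, so overall exact match is just the no-answer fraction. A quick check against the numbers above:

```python
# Sanity check: the reported scores match an "always predict no answer" model.
has_ans_total = 5928   # answerable questions (each scored 0)
no_ans_total = 5945    # unanswerable questions (each scored 100)
total = has_ans_total + no_ans_total

# Overall exact match is the fraction of unanswerable questions.
exact = 100.0 * no_ans_total / total
print(total)   # 11873
print(exact)   # 50.07159100480081
```

This suggests the fine-tuning run collapsed to predicting "no answer" everywhere rather than partially learning the task.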
StillKeepTry (Contributor) commented:

We recommend using the Hugging Face version for SQuAD fine-tuning.
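A minimal sketch of such a run, assuming the Transformers question-answering example script (`run_qa.py`) and the `microsoft/mpnet-base` checkpoint — both are assumptions here, and the hyperparameters are illustrative, not the maintainer's exact settings:

```shell
# Hypothetical Hugging Face fine-tuning invocation;
# --version_2_with_negative enables SQuAD 2.0's unanswerable questions.
python run_qa.py \
  --model_name_or_path microsoft/mpnet-base \
  --dataset_name squad_v2 \
  --version_2_with_negative \
  --do_train \
  --do_eval \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./mpnet-squad2
```

Omitting `--version_2_with_negative` on SQuAD 2.0 data is a common cause of degenerate no-answer behavior, since the script then treats every question as answerable during preprocessing.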
