Suggestion about adding one evaluation paper about LLMs in science #9

Closed
taichengguo opened this issue Jul 20, 2023 · 2 comments


taichengguo commented Jul 20, 2023

Thanks for your interesting and comprehensive survey.

If possible, please consider adding our evaluation work about LLMs in chemistry, "What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks" (https://arxiv.org/abs/2305.18365) to the list.

Our work establishes a comprehensive benchmark of eight practical chemistry tasks to evaluate LLMs (GPT-4, GPT-3.5, and Davinci-003) in both zero-shot and few-shot in-context learning settings. We aim to address the lack of a comprehensive assessment of LLMs in the field of chemistry.

Thanks! 😊

@MLGroupJLU (Owner)

Thank you for your interest in our paper.
Your research is valuable and well executed, and we plan to include it in our survey.
We will upload the updated version of the paper to arXiv soon; we look forward to your continued attention.

@taichengguo (Author)

Thanks!
