
[2303.16634] G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment #901

Open
1 task
ShellLM opened this issue Aug 20, 2024 · 1 comment
Labels
human-verified: <INST>NEVER PICK THIS LABEL</INST>
llm: Large Language Models
llm-evaluation: Evaluating Large Language Models performance and behavior through human-written evaluation sets
llm-experiments: experiments with large language models
MachineLearning: ML Models, Training and Inference
Papers: Research papers

Comments


ShellLM commented Aug 20, 2024

[2303.16634] G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment

Snippet

"The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose preliminary analysis on the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators having a bias towards the LLM-generated texts."

Suggested labels

None

ShellLM added the ai-leaderboards (leaderboards for LLMs and other ML models), llm (Large Language Models), llm-evaluation (Evaluating Large Language Models performance and behavior through human-written evaluation sets), llm-experiments (experiments with large language models), and Papers (Research papers) labels on Aug 20, 2024

ShellLM commented Aug 20, 2024

Related content

#823 similarity score: 0.9
#895 similarity score: 0.88
#811 similarity score: 0.87
#898 similarity score: 0.86
#830 similarity score: 0.86
#813 similarity score: 0.86

irthomasthomas added the MachineLearning (ML Models, Training and Inference) and human-verified (<INST>NEVER PICK THIS LABEL</INST>) labels, and removed the ai-leaderboards (leaderboards for LLMs and other ML models) label on Aug 20, 2024