Paper Metadata: {2024.acl-long.693}, closes #3876.
anthology-assist committed Sep 17, 2024
1 parent e5776ec commit 9cc02a1
Showing 1 changed file with 2 additions and 2 deletions.
--- a/data/xml/2024.acl.xml
+++ b/data/xml/2024.acl.xml
@@ -8984,15 +8984,15 @@
<bibkey>zhao-etal-2024-tapera</bibkey>
</paper>
<paper id="693">
-      <title><fixed-case>K</fixed-case>nowledge<fixed-case>FM</fixed-case>ath: A Knowledge-Intensive Math Reasoning Dataset in Finance Domains</title>
+      <title>FinanceMATH: Knowledge-Intensive Math Reasoning in Finance Domains</title>
<author><first>Yilun</first><last>Zhao</last><affiliation>Yale University</affiliation></author>
<author><first>Hongjun</first><last>Liu</last></author>
<author><first>Yitao</first><last>Long</last><affiliation>New York University</affiliation></author>
<author><first>Rui</first><last>Zhang</last><affiliation>Pennsylvania State University</affiliation></author>
<author><first>Chen</first><last>Zhao</last><affiliation>New York University Shanghai</affiliation></author>
<author><first>Arman</first><last>Cohan</last><affiliation>Yale University and Allen Institute for Artificial Intelligence</affiliation></author>
<pages>12841-12858</pages>
-      <abstract>We introduce KnowledgeFMath, a novel benchmark designed to evaluate LLMs’ capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, KnowledgeFMath includes 1,259 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 26 LLMs with different prompting strategies like Chain-of-Thought and Program-of-Thought. Our experimental results reveal that the current best-performing system (i.e., GPT-4 with CoT prompting) achieves only 56.6% accuracy, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve their performance (e.g., from 33.5% to 47.1% for GPT-3.5), their accuracy remains significantly lower than the estimated human expert performance of 92%. We believe that KnowledgeFMath can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving math reasoning problems.</abstract>
+      <abstract>We introduce FinanceMath, a novel benchmark designed to evaluate LLMs' capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, FinanceMath includes 1,200 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 44 LLMs with both Chain-of-Thought and Program-of-Thought prompting methods. Our experimental results reveal that the current best-performing system (i.e., GPT-4o) achieves only 60.9% accuracy using CoT prompting, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve model performance (e.g., from 47.5% to 54.5% for Gemini-1.5-Pro), their accuracy remains significantly lower than the estimated human expert performance of 92%. We believe that FinanceMath can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving reasoning-intensive tasks.</abstract>
<url hash="ba7e748c">2024.acl-long.693</url>
<bibkey>zhao-etal-2024-knowledgefmath</bibkey>
<revision id="1" href="2024.acl-long.693v1" hash="e2e21860"/>
