Fix authors for two 2024.kallm papers #3754

Merged · 3 commits · Aug 15, 2024
Changes from 2 commits
13 changes: 9 additions & 4 deletions data/xml/2024.kallm.xml
@@ -25,9 +25,9 @@
 <title>Multi-hop Database Reasoning with Virtual Knowledge Graph</title>
 <author><first>Juhee</first><last>Son</last></author>
 <author><first>Yeon</first><last>Seonwoo</last></author>
-<author><first>Alice</first><last>Oh</last><affiliation>Korea Advanced Institute of Science and Technology</affiliation></author>
-<author><first>James</first><last>Thorne</last><affiliation>KAIST</affiliation></author>
 <author><first>Seunghyun</first><last>Yoon</last><affiliation>Adobe Research</affiliation></author>
+<author><first>James</first><last>Thorne</last><affiliation>KAIST</affiliation></author>
+<author><first>Alice</first><last>Oh</last><affiliation>Korea Advanced Institute of Science and Technology</affiliation></author>
 <pages>1-11</pages>
 <abstract>Application of LLM to database queries on natural language sentences has demonstrated impressive results in both single and multi-hop scenarios. In the existing methodologies, the requirement to re-encode query vectors at each stage for processing multi-hop queries presents a significant bottleneck to the inference speed. This paper proposes VKGFR (Virtual Knowledge Graph based Fact Retriever) that leverages large language models to extract representations corresponding to a sentence’s knowledge graph, significantly enhancing inference speed for multi-hop reasoning without performance loss. Given that both the queries and natural language database sentences can be structured as a knowledge graph, we suggest extracting a Virtual Knowledge Graph (VKG) representation from sentences with LLM. Over the pre-constructed VKG, our VKGFR conducts retrieval with a tiny model structure, showing performance improvements with higher computational efficiency. We evaluate VKGFR on the WikiNLDB and MetaQA datasets, designed for multi-hop database reasoning over text. The results indicate 13x faster inference speed on the WikiNLDB dataset without performance loss.</abstract>
 <url hash="d9dd4e2f">2024.kallm-1.1</url>
@@ -147,11 +147,16 @@
 <paper id="13">
 <title>Improving <fixed-case>LLM</fixed-case>-based <fixed-case>KGQA</fixed-case> for multi-hop Question Answering with implicit reasoning in few-shot examples</title>
 <author><first>Mili</first><last>Shah</last><affiliation>Microsoft</affiliation></author>
-<author><first>Jing</first><last>Tian</last></author>
+<author><first>Joyce</first><last>Cahoon</last><affiliation>Microsoft</affiliation></author>
+<author><first>Mirco</first><last>Milletari</last><affiliation>Microsoft</affiliation></author>
+<author><first>Jing</first><last>Tian</last><affiliation>Microsoft</affiliation></author>
+<author><first>Fotis</first><last>Psallidas</last><affiliation>Microsoft</affiliation></author>
+<author><first>Andreas</first><last>Mueller</last><affiliation>Microsoft</affiliation></author>
+<author><first>Nick</first><last>Litombe</last><affiliation>Microsoft</affiliation></author>
 <pages>125-135</pages>
 <abstract>Large language models (LLMs) have shown remarkable capabilities in generating natural language texts for various tasks. However, using LLMs for question answering on knowledge graphs still remains a challenge, especially for questions requiring multi-hop reasoning. In this paper, we present a novel planned query guidance approach that improves large language model (LLM) performance in multi-hop question answering on knowledge graphs (KGQA). We do this by designing few-shot examples that implicitly demonstrate a systematic reasoning methodology to answer multi-hop questions. We evaluate our approach for two graph query languages, Cypher and SPARQL, and show that the queries generated using our strategy outperform the queries generated using a baseline LLM and typical few-shot examples by up to 24.66% and 7.7% in execution match accuracy for the MetaQA and the Spider benchmarks respectively. We also conduct an ablation study to analyze the incremental effects of the different techniques of designing few-shot examples. Our results suggest that our approach enables the LLM to effectively leverage the few-shot examples to generate queries for multi-hop KGQA.</abstract>
 <url hash="9e621cf2">2024.kallm-1.13</url>
-<bibkey>shah-tian-2024-improving</bibkey>
+<bibkey>shah-etal-2024-improving</bibkey>
 </paper>
 </volume>
 </collection>
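
For reference, a minimal sketch (not part of this PR) of how the corrected author order could be sanity-checked with only the Python standard library. The inline record is a hypothetical excerpt of data/xml/2024.kallm.xml, trimmed to the <author> elements this diff establishes for 2024.kallm-1.1; the same pattern extends to the second record (paper id 13) and its seven authors.

# Illustrative check, assuming a trimmed copy of the corrected record.
import xml.etree.ElementTree as ET

record = """<paper id="1">
<author><first>Juhee</first><last>Son</last></author>
<author><first>Yeon</first><last>Seonwoo</last></author>
<author><first>Seunghyun</first><last>Yoon</last><affiliation>Adobe Research</affiliation></author>
<author><first>James</first><last>Thorne</last><affiliation>KAIST</affiliation></author>
<author><first>Alice</first><last>Oh</last><affiliation>Korea Advanced Institute of Science and Technology</affiliation></author>
</paper>"""

paper = ET.fromstring(record)
# findtext() returns the text of the first matching child element.
names = [f"{a.findtext('first')} {a.findtext('last')}" for a in paper.iter("author")]
assert names == ["Juhee Son", "Yeon Seonwoo", "Seunghyun Yoon",
                 "James Thorne", "Alice Oh"], names
print("author order OK:", ", ".join(names))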