diff --git a/data/xml/2024.kallm.xml b/data/xml/2024.kallm.xml
index 7ded257234..1be0babbb7 100644
--- a/data/xml/2024.kallm.xml
+++ b/data/xml/2024.kallm.xml
@@ -25,9 +25,9 @@
 Multi-hop Database Reasoning with Virtual Knowledge Graph
 JuheeSon
 YeonSeonwoo
- AliceOhKorea Advanced Institute of Science and Technology
- JamesThorneKAIST
 SeunghyunYoonAdobe Research
+ JamesThorneKAIST
+ AliceOhKorea Advanced Institute of Science and Technology
 1-11
 Application of LLM to database queries on natural language sentences has demonstrated impressive results in both single and multi-hop scenarios. In the existing methodologies, the requirement to re-encode query vectors at each stage for processing multi-hop queries presents a significant bottleneck to the inference speed. This paper proposes VKGFR (Virtual Knowledge Graph based Fact Retriever) that leverages large language models to extract representations corresponding to a sentence’s knowledge graph, significantly enhancing inference speed for multi-hop reasoning without performance loss. Given that both the queries and natural language database sentences can be structured as a knowledge graph, we suggest extracting a Virtual Knowledge Graph (VKG) representation from sentences with LLM. Over the pre-constructed VKG, our VKGFR conducts retrieval with a tiny model structure, showing performance improvements with higher computational efficiency. We evaluate VKGFR on the WikiNLDB and MetaQA dataset, designed for multi-hop database reasoning over text. The results indicate 13x faster inference speed on the WikiNLDB dataset without performance loss.
 2024.kallm-1.1
@@ -147,11 +147,16 @@
 Improving <fixed-case>LLM</fixed-case>-based <fixed-case>KGQA</fixed-case> for multi-hop Question Answering with implicit reasoning in few-shot examples
 MiliShahMicrosoft
- JingTian
+ JoyceCahoonMicrosoft
+ MircoMilletariMicrosoft
+ JingTianMicrosoft
+ FotisPsallidasMicrosoft
+ AndreasMuellerMicrosoft
+ NickLitombeMicrosoft
 125-135
 Large language models (LLMs) have shown remarkable capabilities in generating natural language texts for various tasks. However, using LLMs for question answering on knowledge graphs still remains a challenge, especially for questions requiring multi-hop reasoning. In this paper, we present a novel planned query guidance approach that improves large language model (LLM) performance in multi-hop question answering on knowledge graphs (KGQA). We do this by designing few-shot examples that implicitly demonstrate a systematic reasoning methodology to answer multi-hop questions. We evaluate our approach for two graph query languages, Cypher and SPARQL, and show that the queries generated using our strategy outperform the queries generated using a baseline LLM and typical few-shot examples by up to 24.66% and 7.7% in execution match accuracy for the MetaQA and the Spider benchmarks respectively. We also conduct an ablation study to analyze the incremental effects of the different techniques of designing few-shot examples. Our results suggest that our approach enables the LLM to effectively leverage the few-shot examples to generate queries for multi-hop KGQA.
 2024.kallm-1.13
- shah-tian-2024-improving
+ shah-etal-2024-improving