Paper abstract: Abstractive summarization algorithms are prone to generating summaries that are factually inconsistent with respect to the source text. In response to this problem, recent work has introduced automatic metrics for evaluating the factual consistency of summaries. However, the proposed top-down machine-learning-based approaches are limited in the inter-annotator agreement on their validation data and in their explainability. In addition, the extent to which these models cover all types of factual errors is currently unknown. In this exploratory work, we propose a bottom-up detection algorithm that focuses on detecting a single type of error, erroneous references, to complement these top-down approaches, with the goal of constituting a simple, explainable and reliable module for detecting this error type. The proposed approach, based on the novel idea of comparing coreference chains between the source and summary text, showed moderate performance on validation data and poor performance on test data. The poor performance was attributed to sub-optimal mention linking between source and summary texts, coreference annotation issues and limitations in the evaluation. Despite the poor performance, the approach yielded valuable insights into the characteristics of the erroneous references and their possible causes.
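To illustrate the core idea of comparing coreference chains between source and summary, here is a minimal sketch, not the implementation from the report: it assumes coreference chains are already available as lists of mention strings (e.g. produced by an off-the-shelf coreference resolver), and it uses a purely illustrative exact-string linker to connect summary mentions to source chains. A summary chain whose mentions link to different source chains merges distinct source entities and is flagged as a potential erroneous reference.

```python
from typing import List, Optional

# Hypothetical chain format: each chain is a list of mention strings,
# e.g. the output of a coreference resolver run on one text.
Chain = List[str]


def link_mention(mention: str, source_chains: List[Chain]) -> Optional[int]:
    """Link a summary mention to the index of a source chain containing an
    exact (case-insensitive) string match, or None if no chain matches."""
    for idx, chain in enumerate(source_chains):
        if any(mention.lower() == m.lower() for m in chain):
            return idx
    return None


def find_suspect_chains(summary_chains: List[Chain],
                        source_chains: List[Chain]) -> List[Chain]:
    """Flag summary chains whose mentions link to more than one source chain:
    such a chain treats mentions of different source entities as coreferent,
    which may indicate an erroneous reference."""
    suspects = []
    for chain in summary_chains:
        linked = {link_mention(m, source_chains) for m in chain}
        linked.discard(None)  # ignore mentions that could not be linked
        if len(linked) > 1:
            suspects.append(chain)
    return suspects


if __name__ == "__main__":
    source = [["Angela Merkel", "the chancellor", "she"],
              ["Emmanuel Macron", "the French president", "he"]]
    summary = [["the chancellor", "Emmanuel Macron"]]  # conflates two entities
    print(find_suspect_chains(summary, source))
```

In practice, mention linking would need to be more robust than exact string matching; the abstract attributes part of the poor performance to sub-optimal mention linking between source and summary texts. The sketch is only meant to convey the chain-comparison principle.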
thomasroodnl/reference-error-detection
Code for the report titled: "Detecting Factually Erroneous References in Abstractive Summarisation Using Coreference Resolution"