Paper Revision{2024.acl-long.329}, closes #3896.
anthology-assist committed Sep 18, 2024
1 parent 9cc02a1 commit 4f52bdc
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion data/xml/2024.acl.xml
@@ -4266,8 +4266,10 @@
 <author><first>Anette</first><last>Frank</last><affiliation>Ruprecht-Karls-Universität Heidelberg</affiliation></author>
 <pages>6048-6089</pages>
 <abstract>Large language models (LLMs) can explain their predictions through post-hoc or Chain-of-Thought (CoT) explanations. But an LLM could make up reasonably sounding explanations that are unfaithful to its underlying reasoning. Recent work has designed tests that aim to judge the faithfulness of post-hoc or CoT explanations. In this work we argue that these faithfulness tests do not measure faithfulness to the models’ inner workings – but rather their self-consistency at output level. Our contributions are three-fold: i) We clarify the status of faithfulness tests in view of model explainability, characterising them as self-consistency tests instead. This assessment we underline by ii) constructing a Comparative Consistency Bank for self-consistency tests that for the first time compares existing tests on a common suite of 11 open LLMs and 5 tasks – including iii) our new self-consistency measure CC-SHAP. CC-SHAP is a fine-grained measure (not a test) of LLM self-consistency. It compares how a model’s input contributes to the predicted answer and to generating the explanation. Our fine-grained CC-SHAP metric allows us iii) to compare LLM behaviour when making predictions and to analyse the effect of other consistency tests at a deeper level, which takes us one step further towards measuring faithfulness by bringing us closer to the internals of the model than strictly surface output-oriented tests.</abstract>
-<url hash="ab7d495d">2024.acl-long.329</url>
+<url hash="8a044b2a">2024.acl-long.329</url>
 <bibkey>parcalabescu-frank-2024-measuring</bibkey>
+<revision id="1" href="2024.acl-long.329v1" hash="ab7d495d"/>
+<revision id="2" href="2024.acl-long.329v2" hash="8a044b2a" date="2024-09-18">This revision mentions a sponsor in the acknowledgements and fixes the typo in Eq. 4.</revision>
 </paper>
 <paper id="330">
 <title>Learning or Self-aligning? Rethinking Instruction Fine-tuning</title>
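For context, the <revision> elements added above are how the Anthology XML records a paper's revision history: each entry carries an id, an href for the versioned PDF, a file hash, and (from the second revision onward) a date plus a short explanation, while the <url> hash is updated to match the latest revision. A minimal sketch of reading these entries with Python's standard xml.etree module follows; the helper name list_revisions is invented for illustration, and a real lookup would also disambiguate by volume, since paper ids repeat across volumes within one file.

import xml.etree.ElementTree as ET

def list_revisions(xml_path, paper_id):
    """Return (id, href, hash, date, note) tuples for a paper's <revision> entries.

    Simplified sketch: returns the first <paper> with a matching id attribute,
    ignoring that ids recur across volumes in the same Anthology XML file.
    """
    tree = ET.parse(xml_path)
    for paper in tree.getroot().iter("paper"):
        if paper.get("id") == paper_id:
            return [
                (
                    rev.get("id"),
                    rev.get("href"),
                    rev.get("hash"),
                    rev.get("date"),           # absent on the initial revision
                    (rev.text or "").strip(),  # explanation, e.g. "fixes the typo in Eq. 4"
                )
                for rev in paper.findall("revision")
            ]
    return []

# Example: revisions recorded for 2024.acl-long.329 (paper id 329 in this file)
for rev in list_revisions("data/xml/2024.acl.xml", "329"):
    print(rev)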