Commit
Paper Metadata: {2024.starsem-1.30}, closes #3864.
anthology-assist committed Sep 11, 2024
1 parent 7d79460 commit 57f82fb
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions data/xml/2024.starsem.xml
@@ -355,10 +355,10 @@
<paper id="30">
<title>A Trip Towards Fairness: Bias and De-Biasing in Large Language Models</title>
<author><first>Leonardo</first><last>Ranaldi</last><affiliation>University of Rome Tor Vergata and Idiap Research Institute</affiliation></author>
-<author><first>Elena</first><last>Ruzzetti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
+<author><first>Elena Sofia</first><last>Ruzzetti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
<author><first>Davide</first><last>Venditti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
<author><first>Dario</first><last>Onorati</last><affiliation>Sapienza University of Rome</affiliation></author>
-<author><first>Fabio</first><last>Zanzotto</last><affiliation>University of Rome Tor Vergata</affiliation></author>
+<author><first>Fabio Massimo</first><last>Zanzotto</last><affiliation>University of Rome Tor Vergata</affiliation></author>
<pages>372-384</pages>
<abstract>Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs) and, thus, may represent the building blocks of many NLP systems solving downstream tasks. Hence, a little or a large bias in CtB-LLMs may cause huge harm. In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable. Indeed, according to current tests, the LLaMA and the OPT families have an important bias in gender, race, religion, and profession. In contrast to the analysis for other LMMs, we discovered that bias depends not on the number of parameters but on the perplexity. Finally, the debiasing of OPT using LORA reduces bias up to 4.12 points in the normalized stereotype score.</abstract>
<url hash="4ba0e749">2024.starsem-1.30</url>
