From 57f82fb5386f234c8bbe04a0e4864632c158a091 Mon Sep 17 00:00:00 2001
From: anthology-assist
Date: Wed, 11 Sep 2024 17:19:00 -0500
Subject: [PATCH] Paper Metadata: {2024.starsem-1.30}, closes #3864.

---
 data/xml/2024.starsem.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/data/xml/2024.starsem.xml b/data/xml/2024.starsem.xml
index 9872757673..2d25741ccc 100644
--- a/data/xml/2024.starsem.xml
+++ b/data/xml/2024.starsem.xml
@@ -355,10 +355,10 @@
       <title>A Trip Towards Fairness: Bias and De-Biasing in Large Language Models</title>
       <author><first>Leonardo</first><last>Ranaldi</last><affiliation>University of Rome Tor Vergata and Idiap Research Institute</affiliation></author>
-      <author><first>Elena</first><last>Ruzzetti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
+      <author><first>Elena Sofia</first><last>Ruzzetti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
       <author><first>Davide</first><last>Venditti</last><affiliation>University of Rome Tor Vergata</affiliation></author>
       <author><first>Dario</first><last>Onorati</last><affiliation>Sapienza University of Rome</affiliation></author>
-      <author><first>Fabio</first><last>Zanzotto</last><affiliation>University of Rome Tor Vergata</affiliation></author>
+      <author><first>Fabio Massimo</first><last>Zanzotto</last><affiliation>University of Rome Tor Vergata</affiliation></author>
       <pages>372-384</pages>
       <abstract>Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs) and, thus, may represent the building blocks of many NLP systems solving downstream tasks. Hence, a little or a large bias in CtB-LLMs may cause huge harm. In this paper, we performed a large investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable. Indeed, according to current tests, the LLaMA and the OPT families have an important bias in gender, race, religion, and profession. In contrast to the analysis for other LMMs, we discovered that bias depends not on the number of parameters but on the perplexity. Finally, the debiasing of OPT using LORA reduces bias up to 4.12 points in the normalized stereotype score.</abstract>
       <url>2024.starsem-1.30</url>