Commit

Paper Metadata: {2023.findings-acl.706}, closes #3891.
anthology-assist committed Sep 17, 2024
1 parent 23221f5 commit 2fbf0cd
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions data/xml/2023.findings.xml
@@ -11993,9 +11993,9 @@
 <title>Revisit Few-shot Intent Classification with <fixed-case>PLM</fixed-case>s: Direct Fine-tuning vs. Continual Pre-training</title>
 <author><first>Haode</first><last>Zhang</last><affiliation>The Hong Kong Polytechnic University</affiliation></author>
 <author><first>Haowen</first><last>Liang</last><affiliation>The Hong Kong Polytechnic University</affiliation></author>
-<author><first>Li-Ming</first><last>Zhan</last><affiliation>The Hong Kong Polytechnic University</affiliation></author>
-<author><first>Xiao-Ming</first><last>Wu</last><affiliation>Hong Kong Polytechnic University</affiliation></author>
+<author><first>Liming</first><last>Zhan</last><affiliation>The Hong Kong Polytechnic University</affiliation></author>
 <author><first>Albert Y.S.</first><last>Lam</last><affiliation>Fano Labs</affiliation></author>
+<author><first>Xiao-Ming</first><last>Wu</last><affiliation>Hong Kong Polytechnic University</affiliation></author>
 <pages>11105-11121</pages>
 <abstract>We consider the task of few-shot intent detection, which involves training a deep learning model to classify utterances based on their underlying intents using only a small amount of labeled data. The current approach to address this problem is through continual pre-training, i.e., fine-tuning pre-trained language models (PLMs) on external resources (e.g., conversational corpora, public intent detection datasets, or natural language understanding datasets) before using them as utterance encoders for training an intent classifier. In this paper, we show that continual pre-training may not be essential, since the overfitting problem of PLMs on this task may not be as serious as expected. Specifically, we find that directly fine-tuning PLMs on only a handful of labeled examples already yields decent results compared to methods that employ continual pre-training, and the performance gap diminishes rapidly as the number of labeled data increases. To maximize the utilization of the limited available data, we propose a context augmentation method and leverage sequential self-distillation to boost performance. Comprehensive experiments on real-world benchmarks show that given only two or more labeled samples per class, direct fine-tuning outperforms many strong baselines that utilize external data sources for continual pre-training. The code can be found at <url>https://github.com/hdzhang-code/DFTPlus</url>.</abstract>
 <url hash="268745c5">2023.findings-acl.706</url>
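For context on the paper whose metadata is corrected above: its abstract describes training an intent classifier by direct fine-tuning on a few labeled examples and then applying sequential self-distillation. The sketch below is only an illustrative reading of that idea, not the authors' released code (which lives in the DFTPlus repository linked in the abstract). Every name, dimension, and hyperparameter here (train_generation, alpha, temperature, the 768-dim stand-in embeddings, the linear classifier) is an assumption made for illustration.

# Minimal sketch of sequential self-distillation, assuming a simple linear
# classifier over pre-computed utterance embeddings stands in for a PLM + head.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_generation(model, feats, labels, soft_targets=None,
                     alpha=0.5, temperature=2.0, epochs=50, lr=1e-3):
    # Train one generation; if soft_targets is given, add a distillation term.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(feats)
        loss = F.cross_entropy(logits, labels)  # supervised loss on the few gold labels
        if soft_targets is not None:
            log_p = F.log_softmax(logits / temperature, dim=-1)
            kd = F.kl_div(log_p, soft_targets, reduction="batchmean") * temperature ** 2
            loss = alpha * loss + (1.0 - alpha) * kd  # blend gold and teacher signal
        loss.backward()
        opt.step()
    return model

def sequential_self_distillation(feats, labels, num_classes,
                                 generations=3, dim=768, temperature=2.0):
    # Re-train a fresh classifier each round, distilling from the previous round.
    soft = None
    model = None
    for _ in range(generations):
        model = nn.Linear(dim, num_classes)  # hypothetical stand-in for a PLM encoder + head
        model = train_generation(model, feats, labels,
                                 soft_targets=soft, temperature=temperature)
        with torch.no_grad():  # softened predictions become the next generation's teacher
            soft = F.softmax(model(feats) / temperature, dim=-1)
    return model

if __name__ == "__main__":
    # Toy few-shot setup: 4 intents, random vectors standing in for utterance embeddings.
    torch.manual_seed(0)
    feats = torch.randn(20, 768)
    labels = torch.randint(0, 4, (20,))
    clf = sequential_self_distillation(feats, labels, num_classes=4)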
