
Commit

Paper Revision{2021.findings-emnlp.96}, closes #3888.
anthology-assist committed Sep 17, 2024
1 parent 536512f commit 520683f
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion data/xml/2021.findings.xml
@@ -7958,13 +7958,15 @@
 <author><first>Albert Y.S.</first><last>Lam</last></author>
 <pages>1114–1120</pages>
 <abstract>This paper investigates the effectiveness of pre-training for few-shot intent classification. While existing paradigms commonly further pre-train language models such as BERT on a vast amount of unlabeled corpus, we find it highly effective and efficient to simply fine-tune BERT with a small set of labeled utterances from public datasets. Specifically, fine-tuning BERT with roughly 1,000 labeled data yields a pre-trained model – IntentBERT, which can easily surpass the performance of existing pre-trained models for few-shot intent classification on novel domains with very different semantics. The high effectiveness of IntentBERT confirms the feasibility and practicality of few-shot intent detection, and its high generalization ability across different domains suggests that intent classification tasks may share a similar underlying structure, which can be efficiently learned from a small set of labeled data. The source code can be found at <url>https://github.com/hdzhang-code/IntentBERT</url>.</abstract>
-<url hash="2ae86f5a">2021.findings-emnlp.96</url>
+<url hash="297df895">2021.findings-emnlp.96</url>
 <bibkey>zhang-etal-2021-effectiveness-pre</bibkey>
 <doi>10.18653/v1/2021.findings-emnlp.96</doi>
 <video href="2021.findings-emnlp.96.mp4"/>
 <pwcdataset url="https://paperswithcode.com/dataset/banking77">BANKING77</pwcdataset>
 <pwcdataset url="https://paperswithcode.com/dataset/hint3">HINT3</pwcdataset>
 <pwcdataset url="https://paperswithcode.com/dataset/hwu64">HWU64</pwcdataset>
+<revision id="1" href="2021.findings-emnlp.96v1" hash="2ae86f5a"/>
+<revision id="2" href="2021.findings-emnlp.96v2" hash="297df895" date="2024-09-17">Changes the order of the authors.</revision>
 </paper>
 <paper id="97">
 <title>Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment</title>
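The revision entries added above follow the Anthology's per-paper <revision> convention visible in the diff: revision 1 keeps the original hash, and each later revision records a new hash, a date, and a short note. Below is a minimal sketch for reading these entries back out of the XML, assuming a local checkout with data/xml/2021.findings.xml present and using only Python's standard library; the bibkey used for the lookup is the one shown in the diff.

import xml.etree.ElementTree as ET

# Parse the volume file touched by this commit (path assumed relative to the repo root).
tree = ET.parse("data/xml/2021.findings.xml")

for paper in tree.getroot().iter("paper"):
    # Locate the paper by its bibkey, which identifies it uniquely within the file.
    if paper.findtext("bibkey") != "zhang-etal-2021-effectiveness-pre":
        continue
    for rev in paper.findall("revision"):
        note = (rev.text or "").strip()
        # Revision 1 has no date attribute; get() then returns None.
        print(rev.get("id"), rev.get("href"), rev.get("hash"), rev.get("date"), note)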
