diff --git a/data/xml/2021.findings.xml b/data/xml/2021.findings.xml
index d4652661ea..9890bc3539 100644
--- a/data/xml/2021.findings.xml
+++ b/data/xml/2021.findings.xml
@@ -7958,13 +7958,15 @@
Albert Y.S. Lam
1114–1120
This paper investigates the effectiveness of pre-training for few-shot intent classification. While existing paradigms commonly further pre-train language models such as BERT on a vast amount of unlabeled corpus, we find it highly effective and efficient to simply fine-tune BERT with a small set of labeled utterances from public datasets. Specifically, fine-tuning BERT with roughly 1,000 labeled data yields a pre-trained model – IntentBERT, which can easily surpass the performance of existing pre-trained models for few-shot intent classification on novel domains with very different semantics. The high effectiveness of IntentBERT confirms the feasibility and practicality of few-shot intent detection, and its high generalization ability across different domains suggests that intent classification tasks may share a similar underlying structure, which can be efficiently learned from a small set of labeled data. The source code can be found at https://github.com/hdzhang-code/IntentBERT.
- 2021.findings-emnlp.96
+ 2021.findings-emnlp.96
zhang-etal-2021-effectiveness-pre
10.18653/v1/2021.findings-emnlp.96
BANKING77
HINT3
HWU64
+
+ Changes the order of the authors.
Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment