
ÚFAL at MultiLexNorm 2021:

Improving Multilingual Lexical Normalization by Fine-tuning ByT5


David Samuel & Milan Straka

Charles University
Faculty of Mathematics and Physics
Institute of Formal and Applied Linguistics


Paper
Interactive demo on Google Colab
HuggingFace models

Illustration of our model.



This is the official repository for the winning entry to the W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm) shared task, which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.

Our system is based on ByT5, which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these source files, we also release the fine-tuned models on HuggingFace (TODO) and an interactive demo on Google Colab.
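As a quick illustration of the sequence-to-sequence interface, here is a minimal sketch of querying one of the released models from Python. This is not the official inference pipeline: the model identifier below is an assumption, and the fine-tuned models may expect a specific input format, so consult the Colab demo for the exact usage.

# Minimal sketch of querying a fine-tuned ByT5 normalization model.
# ASSUMPTIONS: the model identifier and feeding a raw sentence directly;
# the official input format is defined by the Colab demo and this repo.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "ufal/byt5-small-multilexnorm2021-en"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("u r gr8", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))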



How to run

🐾   Clone repository and install the Python requirements

git clone https://github.com/ufal/multilexnorm2021.git
cd multilexnorm2021

pip3 install -r requirements.txt 

🐾   Initialize

Run the initialization script to download the official MultiLexNorm data together with a dump of English Wikipedia. To replicate our results, you can download the preprocessed dumps for all languages here. To use fresher sources, we recommend downloading up-to-date Wikipedia dumps to obtain clean multilingual data.

./initialize.sh
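The downloaded datasets use the shared task's two-column format: one raw<TAB>normalized pair per line, with blank lines separating sentences (one-to-many normalizations are written as a space-separated string in the second column). A small reader, shown below as our own helper rather than part of this repository, could look like this:

# Read a MultiLexNorm file into a list of sentences, each a list of
# (raw, normalized) token pairs. Illustration only; the repository has
# its own data-loading code.
def read_norm_file(path):
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line ends a sentence
                if current:
                    sentences.append(current)
                    current = []
            else:
                raw, norm = line.split("\t")  # two tab-separated columns
                current.append((raw, norm))
    if current:                               # handle a missing final blank line
        sentences.append(current)
    return sentences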

🐾   Train

To train a model for English lexical normalization, simply run the following command. Configurations for the other languages are located in the config folder.

python3 train.py --config config/en.yaml
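For intuition, the following is a minimal, self-contained sketch of the underlying fine-tuning idea: training ByT5 to map noisy tokens to their normalized forms. It is not the repo's train.py (which is driven by the YAML configs); the toy data and hyperparameters are our own.

# Toy fine-tuning loop for ByT5 as a noisy-to-normalized seq2seq model.
# NOT the repository's training code; illustration only.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy (noisy, normalized) pairs; real training uses the MultiLexNorm data
# downloaded by initialize.sh (plus synthetic pre-training data).
pairs = [("u r gr8", "you are great"), ("im sooo happy", "i'm so happy")]

model.train()
for noisy, clean in pairs:
    batch = tokenizer(noisy, return_tensors="pt")
    labels = tokenizer(clean, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # cross-entropy over byte tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()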


Please cite the following publication

@inproceedings{wnut-ufal,
  title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
  author = "Samuel, David and Straka, Milan",
  booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
  year = "2021",
  publisher = "Association for Computational Linguistics",
  address = "Punta Cana, Dominican Republic"
}