Unified Multimodal Transformer (UMT) for Multimodal Named Entity Recognition (MNER)

Two MNER datasets and code for our ACL 2020 paper: Improving Multimodal Named Entity Recognition via Entity Span Detection with Unified Multimodal Transformer

Author

Jianfei Yu

[email protected]

July 1, 2020

Data

Requirements

  • PyTorch 1.0.0
  • Python 3.7
  • pytorch-crf 0.7.2 (see the sketch after this list)
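
Since the requirements pin pytorch-crf 0.7.2 and the training script's name mentions CRF, here is a minimal sketch of how that library's CRF layer is typically used for tag decoding in sequence labeling. All tensor shapes, tag counts, and variable names below are illustrative assumptions, not code from this repository.

```python
# Hypothetical usage sketch of torchcrf.CRF (pytorch-crf 0.7.2).
# Shapes and tag counts are made-up examples.
import torch
from torchcrf import CRF

num_tags = 9                  # e.g. BIO tags over PER/LOC/ORG/MISC plus O
batch_size, seq_len = 2, 5

crf = CRF(num_tags, batch_first=True)

emissions = torch.randn(batch_size, seq_len, num_tags)      # per-token tag scores
tags = torch.randint(num_tags, (batch_size, seq_len))       # gold tag ids
mask = torch.ones(batch_size, seq_len, dtype=torch.uint8)   # 1 = real token

loss = -crf(emissions, tags, mask=mask)        # negative log-likelihood to minimize
best_paths = crf.decode(emissions, mask=mask)  # Viterbi-decoded tag sequences
```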

Code Usage

Training for UMT

  • This is the training code: it tunes hyperparameters on the dev set and then evaluates on the test set. Note that you can change "CUDA_VISIBLE_DEVICES=2" based on your available GPUs (see the sketch after this list).
sh run_mtmner_crf.sh
  • We show our running logs on twitter-2015 and twitter-2017 in the folder "log files". Note that the results are slightly lower than those reported in our paper, since the experiments were run on different servers.
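
If you would rather select the GPU from inside Python than edit the shell script, the standard mechanism is the CUDA_VISIBLE_DEVICES environment variable, sketched below; the device id "0" is just an example.

```python
# Equivalent to prefixing the command with CUDA_VISIBLE_DEVICES=<id>.
# Must be set before PyTorch initializes CUDA.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # replace "0" with a free GPU id

import torch
print(torch.cuda.device_count())  # should now report 1 visible device
```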

Evaluation

  • In our code, we mainly use "seqeval" to compute Micro-F1 as the evaluation metric. Note that the latest versions of seqeval also report a weighted F1 score, which adds a row to the classification report; in that case you may need to change our Micro-F1 parsing code from float(report.split('\n')[-3].split(' ')[-2].split(' ')[-1]) to float(report.split('\n')[-4].split(' ')[-2].split(' ')[-1]). See the sketch after this list.
  • In addition to "seqeval", we also borrow the evaluation code from this repo to compute Micro-F1. The Micro-F1 scores computed by these two implementations should be the same.
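
To make the version-dependent parsing concrete, here is a hedged sketch using seqeval's classification_report. The toy tag sequences are invented, and the whitespace-agnostic split() is a more robust stand-in for the repo's exact string indexing.

```python
# Toy illustration of parsing Micro-F1 out of seqeval's report string.
from seqeval.metrics import classification_report

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"]]

report = classification_report(y_true, y_pred, digits=4)
lines = report.split('\n')

# Older seqeval: the micro-average row is 3 lines from the end ([-3]);
# newer versions append a weighted-avg row, pushing it to [-4].
# Searching for the row by name sidesteps the exact index and padding.
micro_row = next(l for l in lines if "micro avg" in l or "avg / total" in l)
micro_f1 = float(micro_row.split()[-2])  # columns end with: f1-score, support
print(micro_f1)
```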

Acknowledgements