Launch all required model training using the `tree_lstms/scripts/submit_train_arithmetic.sh <SETUP>` script:
- Launch it twice, using `bcm_pp` and then `bcm_tt` as setups (see the example commands below this list).
- The latter can only be started once the former has finished running.
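A minimal sketch of the two launches, assuming the script is invoked with `bash` from the repository root and blocks until training is done (both assumptions):

```bash
# First setup; this run must finish before the second is launched.
bash tree_lstms/scripts/submit_train_arithmetic.sh bcm_pp

# Second setup; start only once the bcm_pp run has completed.
bash tree_lstms/scripts/submit_train_arithmetic.sh bcm_tt
```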
Afterwards, reproduce the graphs contained in Section 4 of the paper using the `analysis/arithmetic/visualise.ipynb` notebook.
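If Jupyter is installed, the notebooks referenced throughout can be opened in the usual way, e.g.:

```bash
# Opens the arithmetic visualisation notebook in a browser session.
jupyter notebook analysis/arithmetic/visualise.ipynb
```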
Launch all required model training using the `tree_lstms/scripts/submit_train_sentiment.sh <SETUP>` script:
- Launch it twice, using `bcm_pp` and then `bcm_tt` as setups (see the example commands below this list).
- The latter can only be started once the former has finished running.
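The same pattern, under the same assumptions as the arithmetic runs above, applies here:

```bash
# Run the two setups strictly in order; bcm_tt only after bcm_pp finishes.
bash tree_lstms/scripts/submit_train_sentiment.sh bcm_pp
bash tree_lstms/scripts/submit_train_sentiment.sh bcm_tt
```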
Afterwards, reproduce the graphs contained in the paper using the following notebooks:
- Figure 5: `analysis/sentiment/visualise_task_performance.ipynb`
- Figure 6: obtain the predictions from `analysis/sentiment/run_baseline.py` (example invocation below); the figure itself was created by hand afterwards.
- Figure 7: `analysis/sentiment/compare_to_baseline.ipynb`
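For Figure 6, the arguments of the baseline script are not documented above, so the plain invocation below is an assumption; check the script itself for required options:

```bash
# Hypothetical invocation: run_baseline.py may expect additional arguments.
python analysis/sentiment/run_baseline.py
```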
Launch all required model training using the `sentiment_training/scripts/submit_train.sh` script.
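Assuming this script is also run with `bash` from the repository root (an assumption), the invocation would simply be:

```bash
bash sentiment_training/scripts/submit_train.sh
```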
Afterwards, reproduce the graphs contained in the paper using the `analysis/sentiment/visualise_task_performance.ipynb` notebook.
First obtain:
- Topographic similarity: run `python topographic_similarity.py` (see the sketch below this list).
- Memorization: the memorization values of Zhang et al., extracted as demonstrated in their notebook: https://github.com/xszheng2020/memorization/blob/master/sst/05_Atypical_Phase.ipynb
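The location of `topographic_similarity.py` is not given above; assuming it sits in `analysis/sentiment/` alongside the other analysis scripts (an assumption), it would be run as:

```bash
# Assumed location; adjust the path if the script lives elsewhere.
cd analysis/sentiment
python topographic_similarity.py
```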
Then visualise Figures 15 and 16 with the `analysis/sentiment/alternative_metrics.ipynb` notebook.
```bibtex
@inproceedings{dankers2022recursive,
    title={Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality},
    author={Dankers, Verna and Titov, Ivan},
    booktitle={Findings of the Association for Computational Linguistics: EMNLP 2022},
    pages={4361--4378},
    year={2022}
}
```