v2.5.0

Released by @epwalsh on 03 Jun 17:36

allennlp-models release corresponding to allennlp v2.5.0.

What's new

Changed ⚠️

  • Updated all instances of sanity_checks to confidence_checks.
  • The num_serialized_models_to_keep parameter is now called keep_most_recent_by_count (see the config sketch after this list).
  • Improvements to the vision models and other models that use allennlp.modules.transformer under the hood.
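
Below is a minimal config sketch for the renamed checkpointer parameter, written as a Python dict. The surrounding keys and values are illustrative, not defaults:

```python
# A minimal trainer-config fragment using the renamed checkpointer
# parameter. The old name appears only in the comment; the value 2 is
# illustrative, not a default.
trainer_fragment = {
    "trainer": {
        "checkpointer": {
            # was: "num_serialized_models_to_keep": 2
            "keep_most_recent_by_count": 2,
        },
    },
}
```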

Added 🎉

  • Added tests for checklist suites for SQuAD-style reading comprehension models (bidaf) and textual entailment models (decomposable_attention and esim).
  • Added an optional "weight" parameter to CopyNetSeq2Seq.forward() for calculating a weighted loss instead of a simple
    average over the negative log likelihoods for each instance in the batch (see the sketch after this list).
  • Added a way to initialize the SrlBert model without caching/loading pretrained transformer weights.
    To do this, set the bert_model parameter to the dictionary form of the corresponding BertConfig from HuggingFace.
    See PR #257 for more details, and the sketch after this list.
  • Added a beam_search parameter to the generation models so that a BeamSearch object can be specified in their configs (see the sketch after this list).
  • Added a binary gender bias-mitigated RoBERTa model for SNLI.
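
The following runnable sketch illustrates the weighted-loss semantics the new "weight" parameter enables. Normalizing by the sum of the weights is one common convention and is an assumption here, not necessarily the model's exact formula; PR #258 is authoritative:

```python
import torch

# Per-instance negative log likelihoods for a batch of 3 (illustrative values).
nll = torch.tensor([0.7, 1.2, 0.3])

# Per-instance weights, e.g. emphasizing the second instance.
weight = torch.tensor([1.0, 2.0, 0.5])

# Without weights: the simple average used previously.
simple_loss = nll.mean()

# With weights: a weighted average. Dividing by weight.sum() is an assumed
# convention; check PR #258 for the exact normalization.
weighted_loss = (nll * weight).sum() / weight.sum()

print(simple_loss.item(), weighted_loss.item())
```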
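A hedged sketch of the SrlBert initialization path described above. The "srl_bert" type name and the shape of the config fragment are assumptions based on the usual allennlp-models registration; see PR #257 for the authoritative interface:

```python
from transformers import BertConfig

# Build the dictionary form of a BertConfig. The default constructor keeps
# this offline; in practice you would mirror the architecture you want.
bert_config_dict = BertConfig().to_dict()

# Passing a dict (rather than a model-name string) tells SrlBert to construct
# a randomly initialized BERT instead of downloading pretrained weights.
model_fragment = {
    "model": {
        "type": "srl_bert",            # assumed registered name
        "bert_model": bert_config_dict,
    },
}
```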
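And a sketch of the new beam_search parameter in a generation model's config. The model type and the BeamSearch keys shown (beam_size, max_steps) are illustrative assumptions:

```python
# A generation-model config fragment (as a Python dict) specifying a
# BeamSearch object via the new beam_search parameter. Values illustrative.
model_fragment = {
    "model": {
        "type": "copynet_seq2seq",   # assumption: any generation model with the new parameter
        "beam_search": {
            "beam_size": 5,
            "max_steps": 50,
        },
    },
}
```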

Commits

a98e13a Specify BeamSearch as a parameter (#267)
5dcf2b9 Added binary gender bias-mitigated RoBERTa model for SNLI (#268)
79d25e5 tick version for nightly release
50a0452 Checkpointing (#269)
07f1b56 Update nr-interface requirement from <0.0.4 to <0.0.6 (#266)
8bf4e1c cancel redundant GH Actions builds (#270)
2f1b779 Update roberta-sst.json (#264)
dea182c Avoid duplicate tokenization of context in training (#263)
dc633f1 Updates for transformer toolkit changes (#261)
53c61dd Renaming sanity_checks to confidence_checks (#262)
3ec87c7 set codecov to 'informational' mode
45068bb Vgqa dataset reader (#260)
77315fc Add weighting option to CopyNet (#258)
845fe4c Add way to initialize SrlBert without pretrained BERT weights (#257)
ab1e86a Checklist tests (#255)
659c71f Update pretrained.py: Quick fix to be able to load pertained_models directly to GPU. (#254)