Add way to initialize SrlBert without pretrained BERT weights #257
Closes allenai/allennlp#5170.
You can avoid caching/loading pretrained BERT weights by setting the `bert_model` parameter of `SrlBert` to a dictionary that corresponds to the `BertConfig` from HuggingFace. You'll also need a local copy of the config and vocab to avoid downloads from the dataset reader, so the easiest complete work-around would look something like the first sketch below; you can set up the local files you need by running the second.
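A minimal sketch of the work-around from Python, assuming a local directory that already holds the BERT config and vocab (the path and the exact `BertConfig` keys are illustrative; the values shown are the `bert-base-uncased` defaults). In a training config, the equivalent values would presumably go under the `dataset_reader` and `model` entries:

```python
from allennlp.data import Vocabulary
from allennlp_models.structured_prediction.dataset_readers.srl import SrlReader
from allennlp_models.structured_prediction.models.srl_bert import SrlBert

# Hypothetical local directory containing the BERT config and vocab files.
local_bert = "/path/to/local/bert-base-uncased"

# Point the dataset reader at the local copy so it never hits the hub.
reader = SrlReader(bert_model_name=local_bert)

# Pass `bert_model` as a dictionary of BertConfig parameters, so BERT starts
# from randomly initialized weights instead of the pretrained checkpoint.
model = SrlBert(
    vocab=Vocabulary(),
    bert_model={
        "vocab_size": 30522,
        "hidden_size": 768,
        "num_hidden_layers": 12,
        "num_attention_heads": 12,
        "intermediate_size": 3072,
    },
)
```

And a sketch of setting up the local files with the `transformers` API (again, the target directory is hypothetical):

```python
from transformers import BertConfig, BertTokenizer

# Save just the config and vocab locally -- no model weights are downloaded.
BertConfig.from_pretrained("bert-base-uncased").save_pretrained(
    "/path/to/local/bert-base-uncased"
)
BertTokenizer.from_pretrained("bert-base-uncased").save_pretrained(
    "/path/to/local/bert-base-uncased"
)
```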
This is related to allenai/allennlp#5172, but required its own solution since the `SrlBert` model is a bit of an oddball in that it uses the BERT model class from `transformers` directly, instead of through AllenNLP's `PretrainedTransformerEmbedder`.
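Roughly, the distinction is the following (a simplified sketch for illustration, not the actual `SrlBert` source):

```python
from transformers import BertModel
from allennlp.modules.token_embedders import PretrainedTransformerEmbedder

# What SrlBert effectively does: hold a transformers BertModel directly,
# so the usual AllenNLP embedder-level hooks don't apply to it.
bert = BertModel.from_pretrained("bert-base-uncased")

# What most AllenNLP models do instead: go through the embedder abstraction,
# which is what allenai/allennlp#5172 targets.
embedder = PretrainedTransformerEmbedder(model_name="bert-base-uncased")
```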