Releases · adapter-hub/adapters
v3.1.0
Based on transformers v4.21.3
New
New adapter methods
- Add LoRA implementation (@calpt via #334, #399)
- Add (IA)^3 implementation (@calpt via #396)
- Add UniPELT implementation (@calpt via #407)
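LoRA reparameterizes a frozen weight matrix with a trainable low-rank update scaled by alpha/r. A minimal NumPy sketch of the idea (illustrative only, not the library's implementation; variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2    # hidden size, LoRA rank
alpha = 4      # LoRA scaling numerator

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

x = rng.normal(size=d)

# LoRA computes h = W x + (alpha / r) * B (A x). Because B starts at
# zero, the adapted layer initially matches the frozen layer exactly,
# and only A and B receive gradients during fine-tuning.
h = W @ x + (alpha / r) * (B @ (A @ x))
```

Only the small matrices A and B (2·d·r parameters here, versus d² for W) are trained.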
New model integrations
- Add Deberta and DebertaV2 integration (@hSterz via #340)
- Add Vision Transformer integration (@calpt via #363)
Misc
- Add `adapter_summary()` method (@calpt via #371)
- Return AdapterFusion attentions using `output_adapter_fusion_attentions` argument (@calpt via #417)
Fixed
- Infer label names for training for flex head models (@calpt via #367)
- Ensure root dir exists when saving all adapters/heads/fusions (@calpt via #375)
- Avoid attempting to set prediction head if non-existent (@calpt via #377)
- Fix T5EncoderModel adapter integration (@calpt via #376)
- Fix loading adapters together with full model (@calpt via #378)
- Multi-gpu support for prefix-tuning (@alexanderhanboli via #359)
- Fix issues with embedding training (@calpt via #386)
- Fix initialization of added embeddings (@calpt via #402)
- Fix model serialization using `torch.save()` & `torch.load()` (@calpt via #406)
v3.0.1
Based on transformers v4.17.0
Fixed
- [AdapterTrainer] Add missing `preprocess_logits_for_metrics` argument (@stefan-it via #317)
- Fix `save_all_adapters()` such that `with_head` is not ignored (@hSterz via #325)
- Fix inferring batch size for prefix tuning (@calpt via #335)
- Fix bug when using compacters with AdapterSetup context (@calpt via #328)
- [Trainer] Fix issue with AdapterFusion and `load_best_model_at_end` (@calpt via #341)
- Fix generation with GPT-2, T5 and Prefix Tuning (@calpt via #343)
v3.0.0
Based on transformers v4.17.0
New
Efficient Fine-Tuning Methods
- Add Prefix Tuning (@calpt via #292)
- Add Parallel adapters & Mix-and-Match adapter (@calpt via #292)
- Add Compacter (@hSterz via #297)
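Prefix tuning keeps the pretrained model frozen and learns extra key/value vectors that are prepended to each attention layer. A toy NumPy sketch of that mechanism (illustrative only, not the library code; single head, no masking):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

rng = np.random.default_rng(0)
d, T, P = 4, 3, 2   # head dim, sequence length, prefix length

# Frozen attention inputs for a sequence of T tokens.
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Trainable prefix keys/values, prepended on every forward pass
# while the rest of the model stays frozen.
Pk = rng.normal(size=(P, d))
Pv = rng.normal(size=(P, d))

K_ext = np.concatenate([Pk, K])   # (P + T, d)
V_ext = np.concatenate([Pv, V])

# Each query can now also attend to the P learned prefix positions.
attn = softmax(Q @ K_ext.T / np.sqrt(d)) @ V_ext   # (T, d)
```

Only `Pk`/`Pv` (per layer) are updated during training, which is what makes the method parameter-efficient.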
Misc
- Introduce `XAdapterModel` classes as central & recommended model classes (@calpt via #289)
- Introduce `ConfigUnion` class for flexible combination of adapter configs (@calpt via #292)
- Add `AdapterSetup` context manager to replace the `adapter_names` parameter (@calpt via #257)
- Add `ForwardContext` to wrap the model forward pass with adapters (@calpt via #267, #295)
- Search all remote sources when passing `source=None` (new default) to `load_adapter()` (@calpt via #309)
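The context-manager approach means the active adapter no longer has to be threaded through every forward call as an argument; the model reads it from the surrounding context instead. A hypothetical, self-contained sketch of that pattern using `contextvars` (`adapter_setup()` and `forward()` here are stand-ins for illustration, not the library's API):

```python
import contextvars
from contextlib import contextmanager

# Context variable holding the name of the currently active adapter.
_active_adapter = contextvars.ContextVar("active_adapter", default=None)

@contextmanager
def adapter_setup(name):
    """Activate an adapter for everything executed inside the block."""
    token = _active_adapter.set(name)
    try:
        yield
    finally:
        _active_adapter.reset(token)

def forward(x):
    # A layer consults the context instead of an `adapter_names` argument.
    adapter = _active_adapter.get()
    return f"{x} via {adapter}" if adapter else x

with adapter_setup("sst-2"):
    out = forward("input")   # adapter "sst-2" is active here
```

Outside the `with` block the context variable reverts, so nested or concurrent setups do not interfere with each other.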
Changed
- Deprecate `XModelWithHeads` in favor of `XAdapterModel` (@calpt via #289)
- Refactor adapter integration into model classes and model configs (@calpt via #263, #304)
- Rename activation functions to match Transformers' names (@hSterz via #298)
- Upgrade of underlying transformers version (@calpt via #311)
v2.3.0
v2.2.0
Based on transformers v4.11.3
New
Model support
- `T5` adapter implementation (@AmirAktify & @hSterz via #182)
- `EncoderDecoderModel` adapter implementation (@calpt via #222)
Prediction heads
- `AutoModelWithHeads` prediction heads for language modeling (@calpt via #210)
- `AutoModelWithHeads` prediction head & training example for dependency parsing (@calpt via #208)
Training
- Add a new `AdapterTrainer` for training adapters (@hSterz via #218, #241)
- Enable training of `Parallel` block (@hSterz via #226)
Misc
- Add `get_adapter_info()` method (@calpt via #220)
- Add `set_active` argument to add & load adapter/fusion/head methods (@calpt via #214)
- Minor improvements for adapter card creation for HF Hub upload (@calpt via #225)
Changed
- Upgrade of underlying transformers version (@calpt via #232, #234, #239)
- Allow multiple AdapterFusion configs per model; remove `set_adapter_fusion_config()` (@calpt via #216)
v2.1.0
Based on transformers v4.8.2
New
Integration into HuggingFace's Model Hub
- Add support for loading adapters from HuggingFace Model Hub (@calpt via #162)
- Add method to push adapters to HuggingFace Model Hub (@calpt via #197)
BatchSplit adapter composition
- `BatchSplit` composition block for adapters and heads (@hSterz via #177)
Various new features
- Add automatic conversion of static heads when loaded via `XModelWithHeads` (@calpt via #181)
- Add `list_adapters()` method to search for adapters (@calpt via #193)
- Add `delete_adapter()`, `delete_adapter_fusion()` and `delete_head()` methods (@calpt via #189)
- MAD-X 2.0 WikiAnn NER notebook (@hSterz via #187)
- Upgrade of underlying transformers version (@hSterz via #183, @calpt via #194 & #200)
Changed
- Deprecate `add_fusion()` and `train_fusion()` in favor of `add_adapter_fusion()` and `train_adapter_fusion()` (@calpt via #190)
v2.0.1
v2.0.0
Based on transformers v4.5.1
All major new features & changes are described at https://docs.adapterhub.ml/v2_transition.
- All changes merged via #105
Additional changes & Fixes
- Support loading adapters with `load_best_model_at_end` in Trainer (@calpt via #122)
- Add setter for `active_adapters` property (@calpt via #132)
- New notebooks for NER, text generation & AdapterDrop (@hSterz via #135)
- Enable trainer to load adapters from checkpoints (@hSterz via #138)
- Update & clean up example scripts (@hSterz via #154 & @calpt via #141, #155)
- Add `unfreeze_adapters` param to `train_fusion()` (@calpt via #156)
- Ensure eval/train mode is correct for AdapterFusion (@calpt via #157)
v1.1.1
Based on transformers v3.5.1
v1.1.0
Based on transformers v3.5.1
New
- New model with adapter support: DistilBERT (@calpt via #67)
- Save label->id mapping of the task together with the adapter prediction head (@hSterz via #75)
- Automatically set matching label->id mapping together with active prediction head (@hSterz via #81)
- Upgraded underlying transformers version (@calpt via #55, #72 and #85)
- Colab notebook tutorials showcasing all AdapterHub concepts (@calpt via #89)