Releases: adapter-hub/adapters

v3.1.0

15 Sep 09:39

Based on transformers v4.21.3

New

New model integrations

  • Add DeBERTa and DeBERTaV2 integration (@hSterz via #340)
  • Add Vision Transformer integration (@calpt via #363)

Fixed

  • Infer label names for training for flex head models (@calpt via #367)
  • Ensure root dir exists when saving all adapters/heads/fusions (@calpt via #375)
  • Avoid attempting to set prediction head if non-existent (@calpt via #377)
  • Fix T5EncoderModel adapter integration (@calpt via #376)
  • Fix loading adapters together with full model (@calpt via #378)
  • Multi-GPU support for prefix tuning (@alexanderhanboli via #359)
  • Fix issues with embedding training (@calpt via #386)
  • Fix initialization of added embeddings (@calpt via #402)
  • Fix model serialization using torch.save() & torch.load() (@calpt via #406)

v3.0.1

18 May 08:48

Based on transformers v4.17.0

New

  • Support float reduction factors in bottleneck adapter configs (@calpt via #339)
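
A minimal sketch of the float support, assuming the adapter-transformers v3 import paths and a placeholder checkpoint; a reduction factor below 1 widens the bottleneck instead of shrinking it:

```python
# Sketch only: import paths and the model checkpoint are assumptions,
# not part of the release notes.
from transformers import AutoAdapterModel
from transformers.adapters import PfeifferConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# reduction_factor=0.5 yields a bottleneck twice the hidden size
# (768 / 0.5 = 1536 units) instead of a down-projection.
config = PfeifferConfig(reduction_factor=0.5)
model.add_adapter("wide_adapter", config=config)
```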

Fixed

  • [AdapterTrainer] Add missing preprocess_logits_for_metrics argument (@stefan-it via #317)
  • Fix save_all_adapters such that with_head is not ignored (@hSterz via #325)
  • Fix inferring batch size for prefix tuning (@calpt via #335)
  • Fix bug when using compacters with AdapterSetup context (@calpt via #328)
  • [Trainer] Fix issue with AdapterFusion and load_best_model_at_end (@calpt via #341)
  • Fix generation with GPT-2, T5 and Prefix Tuning (@calpt via #343)

v3.0.0

23 Mar 19:47

Based on transformers v4.17.0

New

Misc

  • Introduce XAdapterModel classes as central & recommended model classes (@calpt via #289)
  • Introduce ConfigUnion class for flexible combination of adapter configs (@calpt via #292)
  • Add AdapterSetup context manager to replace the adapter_names parameter (@calpt via #257; usage sketched below)
  • Add ForwardContext to wrap model forward pass with adapters (@calpt via #267, #295)
  • Search all remote sources when passing source=None (new default) to load_adapter() (@calpt via #309)
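
The sketch below combines several of these pieces; class names follow the adapter-transformers v3 docs, while the checkpoint and adapter IDs are placeholders:

```python
from transformers import AutoAdapterModel, AutoTokenizer
from transformers.adapters import AdapterSetup, ConfigUnion, PfeifferConfig, PrefixTuningConfig

# The XAdapterModel classes are the new central model classes:
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# ConfigUnion combines multiple adapter configs into a single method:
model.add_adapter("combined", config=ConfigUnion(PrefixTuningConfig(), PfeifferConfig()))

# source=None (the new default) searches all remote sources:
name = model.load_adapter("sentiment/sst-2@ukp", source=None)

# AdapterSetup replaces the adapter_names forward argument; forward
# passes inside the context run with the given adapter.
inputs = tokenizer("Adapters are lightweight.", return_tensors="pt")
with AdapterSetup(name):
    outputs = model(**inputs)
```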

Changed

  • Deprecate XModelWithHeads in favor of XAdapterModel (@calpt via #289)
  • Refactor adapter integration into model classes and model configs (@calpt via #263, #304)
  • Rename activation functions to match Transformers' names (@hSterz via #298)
  • Upgrade of underlying transformers version (@calpt via #311)

v2.3.0

09 Feb 17:49

Based on transformers v4.12.5

Changed

  • Unify built-in & custom head implementation (@hSterz via #252)
  • Upgrade of underlying transformers version (@calpt via #255)

Fixed

  • Fix documentation and consistency issues for AdapterFusion methods (@calpt via #259)
  • Fix serialization/deserialization issues with custom adapter config classes (@calpt via #253)

v2.2.0

14 Oct 10:11

Based on transformers v4.11.3

New

Prediction heads

  • AutoModelWithHeads prediction heads for language modeling (@calpt via #210)
  • AutoModelWithHeads prediction head & training example for dependency parsing (@calpt via #208)
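
A short sketch of the corresponding head methods; the method names follow the adapter-transformers docs of that time, and the label count is a placeholder:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Language modeling head for flex head models (#210):
model.add_masked_lm_head("mlm")

# Biaffine dependency parsing head (#208); num_labels is the number
# of relation types in the treebank (placeholder value here).
model.add_dependency_parsing_head("parsing", num_labels=37)
```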

Misc

  • Add get_adapter_info() method (@calpt via #220)
  • Add set_active argument to add & load adapter/fusion/head methods (@calpt via #214)
  • Minor improvements for adapter card creation for HF Hub upload (@calpt via #225)
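
For illustration, the new lookup method and the set_active flag might be used as below; the import path of get_adapter_info() is an assumption:

```python
from transformers import AutoModelWithHeads
from transformers.adapters.utils import get_adapter_info  # assumed path

# get_adapter_info() queries metadata for an adapter on the Hub:
info = get_adapter_info("sst-2", source="ah")
print(info)

# set_active=True activates an adapter as it is added or loaded,
# replacing a separate set_active_adapters() call:
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("sst-2", set_active=True)
```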

Changed

  • Upgrade of underlying transformers version (@calpt via #232, #234, #239)
  • Allow multiple AdapterFusion configs per model; remove set_adapter_fusion_config() (@calpt via #216)

Fixed

  • Fix incorrect referencing between adapter layer and layer norm for DataParallel (@calpt via #228)

v2.1.0

08 Jul 14:20

Based on transformers v4.8.2

New

Integration into HuggingFace's Model Hub

  • Add support for loading adapters from HuggingFace Model Hub (@calpt via #162)
  • Add method to push adapters to HuggingFace Model Hub (@calpt via #197)
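A sketch of both directions; the repository and adapter IDs are placeholders, and the method names follow the adapter-transformers docs:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Load an adapter hosted on the HuggingFace Model Hub (#162):
name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")
model.set_active_adapters(name)

# Push a trained adapter to the Hub (#197); requires a logged-in
# HuggingFace account:
model.push_adapter_to_hub("my-adapter-repo", name)
```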

Changed

  • Deprecate add_fusion() and train_fusion() in favor of add_adapter_fusion() and train_adapter_fusion() (@calpt via #190)
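
In code, the rename amounts to the following; the composition import path is assumed:

```python
from transformers import AutoModelWithHeads
from transformers.adapters.composition import Fuse

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("a")
model.add_adapter("b")

# Deprecated: model.add_fusion(["a", "b"]); model.train_fusion(...)
# New names as of v2.1.0:
model.add_adapter_fusion(["a", "b"])
model.train_adapter_fusion(Fuse("a", "b"))
```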

Fixed

  • Suppress no-adapter warning when adapter_names is given (@calpt via #186)
  • Fix leave_out in load_adapter() when loading language adapters from the Hub (@hSterz via #177)

v2.0.1

28 May 18:53

Based on transformers v4.5.1

New

  • Allow different reduction factors for different adapter layers (@hSterz via #161)
  • Allow dynamic dropping of adapter layers in load_adapter() (@calpt via #172)
  • Add method get_adapter() to retrieve weights of an adapter (@hSterz via #169)
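
The sketch below exercises all three additions; import paths and the adapter IDs assume the adapter-transformers v2 API:

```python
from transformers import AdapterConfig, AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Per-layer reduction factors (#161); "default" covers the remaining layers:
config = AdapterConfig.load(
    "pfeiffer", reduction_factor={"0": 8, "11": 32, "default": 16}
)
model.add_adapter("example", config=config)

# Dynamically drop the adapters of selected layers while loading (#172):
model.load_adapter("sentiment/sst-2@ukp", leave_out=[0, 1])

# Retrieve the weights of an adapter as a dict (#169):
weights = model.get_adapter("example")
```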

Changed

  • Re-add adapter_names argument to model forward() methods (@calpt via #176)

Fixed

  • Fix resolving of adapter from Hub when multiple options are available (@Aaronsom via #164)
  • Fix & improve adapter saving/loading with the Trainer class (@calpt via #178)

v2.0.0

29 Apr 10:41

Based on transformers v4.5.1

All major new features & changes are described at https://docs.adapterhub.ml/v2_transition.

  • All changes merged via #105

Additional changes & fixes

  • Support loading adapters with load_best_model_at_end in Trainer (@calpt via #122)
  • Add setter for active_adapters property (@calpt via #132)
  • New notebooks for NER, text generation & AdapterDrop (@hSterz via #135)
  • Enable trainer to load adapters from checkpoints (@hSterz via #138)
  • Update & clean up example scripts (@hSterz via #154 & @calpt via #141, #155)
  • Add unfreeze_adapters param to train_fusion() (@calpt via #156)
  • Ensure eval/train mode is correct for AdapterFusion (@calpt via #157)
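
Two of these additions sketched in code; import paths and the adapter names are placeholders:

```python
from transformers import AutoModelWithHeads
from transformers.adapters.composition import Fuse

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("a")
model.add_adapter("b")

# Setter for the active_adapters property (#132):
model.active_adapters = "a"

# unfreeze_adapters=True also unfreezes the stacked adapters while
# training an AdapterFusion layer (#156):
model.add_fusion(["a", "b"])
model.train_fusion(Fuse("a", "b"), unfreeze_adapters=True)
```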

v1.1.1

14 Jan 21:13

Based on transformers v3.5.1

New

  • Modular & custom prediction heads for flex head models (@hSterz via #88)

Fixed

  • Fixes for DistilBERT layer norm and AdapterFusion (@calpt via #102)
  • Fix for reloading full models with AdapterFusion (@calpt via #110)
  • Fix attention and logits output for flex head models (@calpt via #103 & #111)
  • Fix loss output of flex model with QA head (@hSterz via #88)

v1.1.0

30 Nov 15:54

Based on transformers v3.5.1

New

  • New model with adapter support: DistilBERT (@calpt via #67)
  • Save label->id mapping of the task together with the adapter prediction head (@hSterz via #75)
  • Automatically set matching label->id mapping together with active prediction head (@hSterz via #81)
  • Upgraded underlying transformers version (@calpt via #55, #72 and #85)
  • Colab notebook tutorials showcasing all AdapterHub concepts (@calpt via #89)

Fixed

  • Support for models with flexible heads in pipelines (@calpt via #80)
  • Align input handling of flex head models with static prediction head models (@calpt via #90)