.loaded_embeddings, .active_embeddings, and .set_active_embeddings should be exposed to the top-level model class from .base_model #382

Closed
eugene-yang opened this issue Jul 3, 2022 · 2 comments · Fixed by #386
Labels
bug Something isn't working

Comments

@eugene-yang

Environment info

  • adapter-transformers version: 3.0.1+ (commit 11bd9d2)
  • Platform: Arch Linux
  • Python version: 3.10
  • PyTorch version (GPU?):
  • Tensorflow version (GPU?):
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Information

Model I am using (Bert, XLNet ...): XLMR

Language I am using the model on (English, Chinese ...):

Adapter setup I am using (if any):

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQUaD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

from transformers import AutoAdapterModel, AutoTokenizer
model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

model.base_model.loaded_embeddings 
# > {'default': Embedding(250002, 768, padding_idx=1)}
model.loaded_embeddings 
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
   1183     if name in modules:
   1184         return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
   1186     type(self).__name__, name))

AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'loaded_embeddings'
model.base_model.active_embeddings 
# > 'default'
model.active_embeddings 
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
   1183     if name in modules:
   1184         return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
   1186     type(self).__name__, name))

AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'active_embeddings'
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
tokenizer_new = AutoTokenizer.from_pretrained('xlm-roberta-base')
tokenizer_new.add_tokens(['[unused1]'])

model.add_embeddings('new', tokenizer_new, reference_tokenizer=tokenizer, reference_embedding='default')
model.base_model.active_embeddings
# > 'new'

model.set_active_adapters('default')
File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/heads/base.py:641, in ModelWithFlexibleHeadsAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
    627 def set_active_adapters(
    628     self, adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None
    629 ):
    630     """
    631     Sets the adapter modules to be used by default in every forward pass. This setting can be overriden by passing
    632     the `adapter_names` parameter in the `foward()` pass. If no adapter with the given name is found, no module of
   (...)
    639             The list of adapters to be activated by default. Can be a fusion or stacking configuration.
    640     """
--> 641     self.base_model.set_active_adapters(adapter_setup, skip_layers)
    642     # use last adapter name as name of prediction head
    643     if self.active_adapters:

File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/model_mixin.py:358, in ModelAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
    356     for adapter_name in adapter_setup.flatten():
    357         if adapter_name not in self.config.adapters.adapters:
--> 358             raise ValueError(
    359                 f"No adapter with name '{adapter_name}' found. Please make sure that all specified adapters are correctly loaded."
    360             )
    362 # Make sure LoRA is reset
    363 self.reset_lora()

ValueError: No adapter with name 'default' found. Please make sure that all specified adapters are correctly loaded.

Expected behavior

As specified in the documentation, .loaded_embeddings, .active_embeddings, and .set_active_embeddings should be available on the top-level model class, or at the very least be exposed in the same way as .add_embeddings.
They are currently only reachable through .base_model, which is not ideal.
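
For illustration, here is a minimal sketch of the intended top-level access next to the workaround that works today (the object and method names are the ones from the repro above; the commented top-level calls reflect what the documentation describes, not current behavior):

from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

# Workaround available today: go through .base_model
model.base_model.loaded_embeddings
# > {'default': Embedding(250002, 768, padding_idx=1)}
model.base_model.set_active_embeddings('default')
model.base_model.active_embeddings
# > 'default'

# Documented/expected top-level access (currently raises AttributeError):
# model.loaded_embeddings
# model.active_embeddings
# model.set_active_embeddings('default')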

@eugene-yang eugene-yang added the bug Something isn't working label Jul 3, 2022
@calpt
Member

calpt commented Jul 7, 2022

Thanks for reporting, this is indeed an unintended breaking change recently introduced in the master branch. It should be reverted with the merge of #386.

@eugene-yang
Author

Thanks, @calpt!

calpt added a commit that referenced this issue Jul 11, 2022
- Introduces a new `EmbeddingAdaptersWrapperMixin` to make embedding methods available to heads model classes. This is implemented in new per-model heads mixins. Closes #382.
- Fixes size issues with embeddings. Closes #383.
- Detach embedding weights before cloning. Closes #384.
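
For readers unfamiliar with the mixin layout, here is a rough, hypothetical sketch of what forwarding the embedding API from a heads model class to its base model can look like (simplified; the actual `EmbeddingAdaptersWrapperMixin` added in #386 may be structured differently):

class EmbeddingAdaptersWrapperMixin:
    # Hypothetical simplification: delegate the embedding-related attributes
    # of the underlying base model so that the heads model class
    # (e.g. XLMRobertaAdapterModel) exposes them directly.

    @property
    def loaded_embeddings(self):
        return self.base_model.loaded_embeddings

    @property
    def active_embeddings(self):
        return self.base_model.active_embeddings

    def set_active_embeddings(self, name):
        return self.base_model.set_active_embeddings(name)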