Transformer.predict: do not broadcast to listeners #345
Merged
Conversation
The output of a transformer is passed through in two different ways:

- Prediction: the data is passed through the `Doc._.trf_data` attribute.
- Training: the data is broadcast directly to the transformer's listeners.

However, the `Transformer.predict` method breaks this strict separation between training and prediction by also broadcasting transformer outputs to its listeners. This causes a problem when training a model with an unfrozen transformer that is also listed in `annotating_components`: as part of its update step, the transformer first broadcasts the tensors and backprop function to its listeners, but then, acting as an annotating component, it immediately overrides its own output and clears the backprop function. As a result, gradients do not flow into the transformer.

This change removes the broadcast from the `predict` method. If a listener does not receive a batch, it attempts to get the transformer output from the `Doc` instances instead. This also makes it possible to train a pipeline with a frozen transformer.

This ports explosion/spaCy#11385 to `spacy-transformers`. Alternative to #342.
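For context, here is a minimal sketch of the prediction path described above. It assumes the pretrained `en_core_web_trf` pipeline is installed and uses the `TransformerData.tensors` attribute layout of the spacy-transformers 1.1.x line; both are illustrative assumptions, not part of this PR.

```python
import spacy

# Assumes `en_core_web_trf` is installed (illustrative choice).
nlp = spacy.load("en_core_web_trf")
doc = nlp("At prediction time the transformer output travels on the Doc.")

# The Transformer component stores its output on the Doc; with this
# change, a listener that did not receive a broadcast batch reads the
# output from here instead of relying on `predict` to broadcast it.
trf_data = doc._.trf_data
print(trf_data.tensors[-1].shape)  # last layer output for this Doc
```

To reproduce the training scenario this PR fixes, the `transformer` component would be listed in `annotating_components` (and left out of `frozen_components`) under the `[training]` block of the training config.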
danieldk added the `bug` (Something isn't working) and `feat / pipeline` (Feature: Pipeline components) labels on Aug 31, 2022
svlandeg reviewed Sep 7, 2022 (three reviews)
Co-authored-by: Sofie Van Landeghem <[email protected]>
adrianeboyd added a commit to adrianeboyd/spacy-transformers that referenced this pull request on Feb 11, 2023
adrianeboyd added a commit that referenced this pull request on Feb 13, 2023