
[quant][pt2] Fix QAT convert for mobilenetv2 #104110

Closed · andrewor14 wants to merge 1 commit

Conversation

andrewor14 (Contributor) commented Jun 23, 2023

Summary:
QAT convert for mobilenetv2 was previously broken because dropout was incorrectly applied during eval as well as during training. This is because, for exported models, model.eval() does not change the behavior of dropout, unlike eager models built from torch ops. This commit simulates the effect of model.eval() for exported models by replacing the aten dropout pattern before eval. As of this commit, end-to-end QAT numerics for mobilenetv2 match between FX and PT2.

Test Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2

Differential Revision: D46750343

cc @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @ipiszy @chenyang78
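The dropout issue described above can be sketched with a small FX example. This is an illustrative sketch, not the actual PT2 convert code: it uses functional dropout with the training flag baked in as a literal (mimicking how an exported graph ignores model.eval()), and uses torch.fx.subgraph_rewriter to replace the dropout pattern with an identity, analogous to replacing the aten dropout pattern before eval.

```python
# Illustrative sketch (not the actual PT2 convert code): in a traced graph,
# dropout's training flag is captured as a literal, so calling .eval() on
# the traced module no longer disables dropout. We simulate eval by
# rewriting the dropout pattern into an identity.
import torch
import torch.fx
import torch.nn.functional as F
from torch.fx import subgraph_rewriter


class M(torch.nn.Module):
    def forward(self, x):
        # (input, p, training): training=True is baked into the graph,
        # so .eval() on the traced module has no effect on this call
        return F.dropout(x, 0.5, True)


traced = torch.fx.symbolic_trace(M())


def pattern(x):
    # train-mode dropout as it appears in the traced graph
    return F.dropout(x, 0.5, True)


def replacement(x):
    # eval-mode dropout is the identity
    return x


matches = subgraph_rewriter.replace_pattern(traced, pattern, replacement)

x = torch.ones(4)
out = traced(x)  # dropout removed: output equals the input
```

In the actual fix, the same idea is applied at the aten level on the exported model before eval, so that the converted model's eval numerics match the FX flow.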

pytorch-bot bot commented Jun 23, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/104110

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit e759384:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D46750343


andrewor14 added a commit to andrewor14/pytorch that referenced this pull request Jul 7, 2023

Summary: Pull Request resolved: pytorch#104110 (commit message same as the pull request summary above)

Reviewed By: jerryzh168

Differential Revision: D46750343

fbshipit-source-id: 38a1a027ed8680edc022a4c5a4015c3b5c811438

Summary: Pull Request resolved: pytorch#104110 (commit message same as the pull request summary above)

Reviewed By: jerryzh168

Differential Revision: D46750343

fbshipit-source-id: dcf508c0167c1c362410ca860824299c4b35bab6

andrewor14 (Contributor, Author) commented:

@pytorchbot merge

pytorch-bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) Jul 11, 2023
pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

