[Quant][PT2E] Enable X86InductorQuantizer single quantizable op(maxpool2d) #105639

Conversation

leslie-fang-intel (Collaborator) commented Jul 20, 2023

pytorch-bot (bot) commented Jul 20, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/105639

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 2cd113c with merge base 97a291f:

UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@leslie-fang-intel leslie-fang-intel marked this pull request as draft July 20, 2023 04:30
leslie-fang-intel added a commit that referenced this pull request Jul 20, 2023
…ol2d)

ghstack-source-id: a1331e5412b957705ec32a869490e60ec894ef2f
Pull Request resolved: #105639
…le op(maxpool2d)"


**Summary**
In this PR, we mainly enable two things:

- Enable the skeleton of the quantization recipe for single quantizable operators in `X86InductorQuantizer`.
- Add the quantization recipe for `maxpool2d` and annotate it so that its input and output share the same observer.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_non_quantizable_op_after_force_int8_int8_op
```


[ghstack-poisoned]
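
For context, here is a minimal usage sketch (not part of this PR's diff) of how this recipe is expected to be exercised through the PT2E flow. It assumes the torch 2.1-era `capture_pre_autograd_graph`/`prepare_pt2e` entry points and the default `X86InductorQuantizer` configuration:

```python
import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e


class ConvMaxPool(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, 3)
        self.maxpool = torch.nn.MaxPool2d(2)

    def forward(self, x):
        # maxpool2d after a quantizable conv: the recipe annotates it so the
        # observer is shared between its input and output.
        return self.maxpool(self.conv(x))


example_inputs = (torch.randn(1, 3, 32, 32),)
quantizer = xiq.X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())

exported = capture_pre_autograd_graph(ConvMaxPool().eval(), example_inputs)
prepared = prepare_pt2e(exported, quantizer)  # observers inserted per the recipe
prepared(*example_inputs)                     # calibration
converted = convert_pt2e(prepared)            # reference quantized model for Inductor
```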
@leslie-fang-intel leslie-fang-intel added the ciflow/trunk Trigger trunk jobs on your pull request label Jul 20, 2023
leslie-fang-intel added a commit to leslie-fang-intel/pytorch that referenced this pull request Jul 24, 2023
…ol2d)

ghstack-source-id: a1331e5412b957705ec32a869490e60ec894ef2f
Pull Request resolved: pytorch#105639
leslie-fang-intel added a commit that referenced this pull request Jul 24, 2023
…ol2d)

ghstack-source-id: 0ace32d1cf1434f9c55e3bee6f18922242a14a4d
Pull Request resolved: #105639
leslie-fang-intel added a commit that referenced this pull request Jul 24, 2023
…ol2d)

ghstack-source-id: d482e5dcaf6bb2537b7733f4debffc1136e532f2
Pull Request resolved: #105639
```
@@ -28,14 +29,83 @@
    get_source_partitions,
    SourcePartition,
)
from .quantizer import QuantizationAnnotation, QuantizationSpec, Quantizer
from .quantizer import (
```
Contributor

nit: probably better to import from "torch.ao.quantization.quantizer"

Collaborator Author

Thanks for the comment; changed.

leslie-fang-intel added a commit to leslie-fang-intel/pytorch that referenced this pull request Aug 22, 2023
…ol2d)

ghstack-source-id: 0a0d7a11ebfb995a2d840d82667a193e92da62ee
Pull Request resolved: pytorch#105639
```
    list(maxpool_node.users)[0].target == operator.getitem
)
getitem_node = list(node.users)[0]
if not _is_all_annotated([getitem_node, maxpool_node]):
```
Contributor

you probably want to skip if any of the nodes is annotated? otherwise you will be breaking some existing annotations?

leslie-fang-intel (Collaborator Author) commented Aug 23, 2023

Oh, I think we add this check because we expect `getitem_node` and `maxpool_node` to have already been annotated in the previous `_annotation_propagation_quantizable_pattern` step, which only annotates the inputs of these 2 nodes. Then, here we annotate the output of `getitem_node`.

Contributor

I see. I feel this is error-prone, since these might be annotated for other reasons. I think the more robust way to annotate things would be to have a self-contained, meaningful pattern and just annotate that pattern by itself.

Collaborator Author

Makes sense. We can discuss it further.
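
For reference, a minimal sketch of the guard discussed in this thread; the node names follow the diff context, while `_is_all_annotated` and the `quantization_annotation` bookkeeping are assumptions about the surrounding helpers rather than the PR's verbatim code:

```python
import operator


def _is_all_annotated(nodes):
    # A node counts as annotated once an earlier pass has attached a
    # QuantizationAnnotation (with _annotated=True) to its meta.
    return all(
        "quantization_annotation" in node.meta
        and node.meta["quantization_annotation"]._annotated
        for node in nodes
    )


def _annotate_maxpool2d_output(maxpool_node):
    # In the captured graph, maxpool2d appears as a tuple-producing node
    # (values, indices), so the tensor consumers see is its operator.getitem user.
    getitem_node = list(maxpool_node.users)[0]
    assert getitem_node.target == operator.getitem
    # Proceed only if the earlier _annotation_propagation_quantizable_pattern step
    # already annotated the inputs of both nodes; this step then attaches the
    # output spec on getitem so input and output share one observer.
    if not _is_all_annotated([getitem_node, maxpool_node]):
        return
    # ... attach the output QuantizationSpec to getitem_node here ...
```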

jerryzh168 (Contributor) left a comment

accepting to unblock

leslie-fang-intel added a commit to leslie-fang-intel/pytorch that referenced this pull request Aug 25, 2023
…ol2d)

ghstack-source-id: 52b2a7858869e23c64838d1e70e4e82aa3c9c824
Pull Request resolved: pytorch#105639
leslie-fang-intel added a commit to leslie-fang-intel/pytorch that referenced this pull request Aug 25, 2023
…ol2d)

ghstack-source-id: 427a8dff03e45e0d76073ccca839d7ef63c6a1c4
Pull Request resolved: pytorch#105639
leslie-fang-intel (Collaborator Author)
@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Aug 26, 2023
**Summary**
Enable the `dq-maxpool2d-q` pattern match and lower it into `torch.ops.quantized.max_pool2d`.

**Test Plan**
```
python -m pytest test_mkldnn_pattern_matcher.py -k test_qmaxpool2d
python -m pytest test_quantized_op.py -k test_max_pool2d_pt2e
```

Pull Request resolved: #105906
Approved by: https://github.com/jgong5, https://github.com/eellison
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639
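
Conceptually, that lowering replaces a dequantize → max_pool2d → quantize chain with the fused quantized op. Below is a hedged eager-mode analogy, not the Inductor pattern-matcher code from #105906: in the PT2E graph the q/dq nodes come from the `quantized_decomposed` namespace, and the `torch.ops.quantized.max_pool2d` argument layout is assumed from its eager schema. The snippet only illustrates the numerical equivalence the lowering relies on.

```python
import torch
import torch.nn.functional as F

qx = torch.quantize_per_tensor(
    torch.randn(1, 3, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8
)

# Unfused reference pattern the matcher looks for: dequantize -> max_pool2d -> quantize
y_ref = torch.quantize_per_tensor(
    F.max_pool2d(qx.dequantize(), kernel_size=2),
    scale=0.1, zero_point=0, dtype=torch.quint8,
)

# Fused quantized op the pattern is lowered into
y_fused = torch.ops.quantized.max_pool2d(
    qx, kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=[1, 1], ceil_mode=False
)

# Both paths should produce the same integer representation
assert torch.equal(y_ref.int_repr(), y_fused.int_repr())
```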
pytorchmergebot pushed a commit that referenced this pull request Aug 26, 2023
**Summary**
After the oneDNN 3.1 upgrade, we no longer need the weight-scale reciprocal calculation. So, this PR removes the redundant reciprocal calculation to optimize QConv performance and uses the IDeep version API to implement it:

- This QConv implementation is expected to work functionally both with the current IDeep version and with the following IDeep upgrade in PR #107565.
- With the following IDeep upgrade in PR #107565, QConv has better performance since the redundant reciprocal calculation is removed.

Pull Request resolved: #105996
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639, #105906
pytorchmergebot pushed a commit that referenced this pull request Aug 26, 2023
…ht scale reciprocal calculation (#107565)

**Summary**
Upgrade IDeep, which includes one IDeep change (IDeep PR intel/ideep#226).

- IDeep PR intel/ideep#226 does two things:

  - Remove the redundant QConv weight-scale reciprocal calculation.
  - Bump IDEEP_VERSION_REVISION from 0 to 1.

  So only QConv-related calculations are impacted, and we already use the IDeep version API in #105996 to make the corresponding change in PyTorch.

Pull Request resolved: #107565
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639, #105906, #105996
voznesenskym pushed a commit that referenced this pull request Aug 27, 2023
…ol2d) (#105639)

**Summary**
In this PR, we mainly enable two things:

- Enable the skeleton of the quantization recipe for single quantizable operators in `X86InductorQuantizer`.
- Add the quantization recipe for `maxpool2d` and annotate it so that its input and output share the same observer.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_maxpool2d_recipe
```

Pull Request resolved: #105639
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456
@facebook-github-bot facebook-github-bot deleted the gh/leslie-fang-intel/61/head branch August 29, 2023 14:17
Labels
ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source, release notes: quantization (release notes category)
5 participants