
[PT2E][Quant] Refactor quantizer and qnnpack quantizer code to support dqlinear config #99399

Closed
wants to merge 10 commits

Conversation


@kimishpatel kimishpatel commented Apr 18, 2023

Stack from ghstack (oldest at bottom):

This diff introduces a few refactors:

  • Move observer creation to utils.py.
  • Use quantization spec to supply args to observers.
  • Use annotation function registration corresponding to each QuantizationConfig. This
    will later be used for dynamic quantized linear (a hedged sketch follows below).

Differential Revision: D45073790
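
To make the second bullet concrete, here is a minimal sketch of an observer being built from a quantization spec. The QuantizationSpec fields and the create_observer helper are illustrative assumptions, not the exact code in this diff:

from dataclasses import dataclass

import torch
from torch.ao.quantization.observer import MinMaxObserver


# Hypothetical spec; the actual QuantizationSpec in the PR may differ.
@dataclass(frozen=True)
class QuantizationSpec:
    dtype: torch.dtype = torch.qint8
    quant_min: int = -128
    quant_max: int = 127
    qscheme: torch.qscheme = torch.per_tensor_symmetric
    is_dynamic: bool = False


def create_observer(spec: QuantizationSpec):
    # The spec supplies the observer's args, so annotation sites no
    # longer hard-code them.
    return MinMaxObserver.with_args(
        dtype=spec.dtype,
        quant_min=spec.quant_min,
        quant_max=spec.quant_max,
        qscheme=spec.qscheme,
    )

An annotation site can then call create_observer(spec) once and attach the returned constructor wherever that spec applies.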

pytorch-bot bot commented Apr 18, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/99399

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit f38d2ba:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions github-actions bot added the release notes: quantization label on Apr 18, 2023
kimishpatel added a commit that referenced this pull request Apr 18, 2023
The review thread below is attached to this excerpt from the diff:

_QUANT_CONFIG_TO_ANNOTATOR = {}


def register_annotator(quantization_configs: List[QuantizationConfig]):
Contributor:

Why is this indexed by quantization_config? Are we still planning to support module fqn/type-based config or other existing features in QConfigMapping, or do we just want to support a global qconfig?

Contributor Author (kimishpatel):

So the way I imagined this being used is as follows. The qnnpack quantizer (which I will rename to xnnpack) has two configs:
symmetric_static_config
symmetric_dynamic_config

This decorator registers an annotation function for each.
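
A minimal sketch of what that decorator might look like, reusing the _QUANT_CONFIG_TO_ANNOTATOR dict from the excerpt above; the body is an assumption inferred from the signature, not the exact diff:

from typing import Callable, Dict, List

_QUANT_CONFIG_TO_ANNOTATOR: Dict["QuantizationConfig", Callable] = {}


def register_annotator(quantization_configs: List["QuantizationConfig"]):
    def decorator(fn: Callable) -> Callable:
        for config in quantization_configs:
            # Each config maps to exactly one annotation function, so a
            # quantizer can dispatch purely on its active config.
            _QUANT_CONFIG_TO_ANNOTATOR[config] = fn
        return fn

    return decorator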

Now, if I want to quantize a model with dynamic quantized linear for linear layers and static quantization for the rest, I would do it like this with a composable quantizer:

dq_linear_config = QNNPACKQuantizer.get_dynamic_linear_config()
dq_linear_quantizer = QNNPACKQuantizer().set_global(dq_linear_config)

static_config = QNNPACKQuantizer.get_symmetric_config()
static_quantizer = QNNPACKQuantizer().set_global(static_config)

composed_quantizer = ComposableQuantizer([dq_linear_quantizer, static_quantizer])

We can alternatively do this via:

qnnpack_quantizer = QNNPACKQuantizer().set_global(static_config)
# the following API doesn't exist yet
qnnpack_quantizer.set_module_type_config(torch.nn.Linear, QNNPACKQuantizer.get_dynamic_linear_config())

I think both are fine. Right now I am doing it this way mainly because it felt somewhat easier to use, but I am open to doing it via set_module_type_config or a similar API instead.
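
For reference, the ComposableQuantizer used above could be as simple as delegating annotation to each child quantizer in order. This is a sketch under the assumption that every quantizer exposes an annotate(model) method, not necessarily the upstream implementation:

from typing import List

import torch.fx


class ComposableQuantizer:
    def __init__(self, quantizers: List["Quantizer"]):
        self.quantizers = quantizers

    def annotate(self, model: torch.fx.GraphModule) -> torch.fx.GraphModule:
        # Run child quantizers in order, so the dynamic-linear
        # quantizer can claim linear nodes before the static one.
        for quantizer in self.quantizers:
            model = quantizer.annotate(model)
        return model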

Contributor Author (kimishpatel):

Let me think a bit more. If it makes sense to just use the set_module_type_config API, then I will remove the config-to-annotator mapping part from this PR and follow up separately.

Contributor Author (kimishpatel):

OK, I thought about this and I don't think we should do:

dq_linear_config = QNNPACKQuantizer.get_dynamic_linear_config()
dq_linear_quantizer = QNNPACKQuantizer().set_global(dq_linear_config)

static_config = QNNPACKQuantizer.get_symmetric_config()
static_quantizer = QNNPACKQuantizer().set_global(static_config)

composed_quantizer = ComposableQuantizer([dq_linear_quantizer, static_quantizer])

Contributor Author (kimishpatel):

But for now I will use the annotator API until we come up with something better.
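
A hedged sketch of the config-to-annotator dispatch this implies inside the quantizer; set_global returning self and the annotate signature are assumptions, and the registry is the one sketched above:

import torch.fx


class QNNPACKQuantizer:
    # Simplified; the real class carries much more state.
    def __init__(self):
        self.global_config = None

    def set_global(self, config) -> "QNNPACKQuantizer":
        self.global_config = config
        return self  # allows QNNPACKQuantizer().set_global(cfg) chaining

    def annotate(self, model: torch.fx.GraphModule) -> torch.fx.GraphModule:
        # Dispatch to the annotation function registered for the
        # active global config (see the register_annotator sketch).
        annotator = _QUANT_CONFIG_TO_ANNOTATOR[self.global_config]
        return annotator(model, self.global_config)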

… to support dqlinear config"

This diff introduces a few refactors:

- Move observer creation to utils.py.
- Use quantization spec to supply args to observers.
- Use annotation function registration corresponding QuantizationConfig. This
  will be later used in dynamic quantized linear.

Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/)

[ghstack-poisoned]
@jerryzh168 (Contributor):

Just to confirm: the plan is to follow up with the set_module_type_config API, right?

@jerryzh168 jerryzh168 left a comment

Accepting to unblock for now; we can revisit the API a bit later.

kimishpatel added a commit that referenced this pull request Apr 28, 2023
@kimishpatel (Contributor Author):

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label on May 3, 2023
@pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).


@facebook-github-bot facebook-github-bot deleted the gh/kimishpatel/132/head branch June 8, 2023 17:46
Labels: ciflow/trunk, Merged, release notes: AO frontend, release notes: quantization