[PT2E][Quant] Refactor quantizer and qnnpack quantizer code to support dqlinear config #99399
Conversation
This diff introduces a few refactors:
- Move observer creation to utils.py.
- Use quantization spec to supply args to observers.
- Use annotation function registration corresponding to the QuantizationConfig. This will be later used in dynamic quantized linear.

Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/)
_QUANT_CONFIG_TO_ANNOTATOR = {}


def register_annotator(quantization_configs: List[QuantizationConfig]):
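For context, here is a minimal sketch of how a config-keyed annotator registry like this could work. The decorator body, the `Any` typing, and the commented usage are illustrative assumptions, not the PR's actual implementation.

```python
from typing import Any, Callable, Dict, List

# Maps a QuantizationConfig to the annotation function that handles it
# (assumes configs are hashable, e.g. frozen dataclasses).
_QUANT_CONFIG_TO_ANNOTATOR: Dict[Any, Callable] = {}


def register_annotator(quantization_configs: List[Any]):
    def decorator(annotator: Callable) -> Callable:
        # Register the same annotation function for every config it supports.
        for config in quantization_configs:
            _QUANT_CONFIG_TO_ANNOTATOR[config] = annotator
        return annotator

    return decorator


# Hypothetical usage: the quantizer would later look up the annotator for its
# global config, e.g. _QUANT_CONFIG_TO_ANNOTATOR[global_config](model, global_config).
# @register_annotator([get_symmetric_quantization_config()])
# def annotate_symmetric_static(model, quantization_config):
#     ...  # attach annotations to matched nodes
```

Because the registry is keyed by config, the lookup path only ever sees one config per quantizer instance, which is what prompts the question below about module-type or fqn-based configs.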
Why is this indexed by quantization_config? Are we still planning to support module fqn/type based configs or other existing features in QConfigMapping, or do we just want to support a global qconfig?
So the way I imagined this being used is as follows:
The qnnpack quantizer (I will rename this to xnnpack) has two configs:
- symmetric_static_config
- symmetric_dynamic_config
This decorator registers an annotation function for each.
Now if I want to quantize a model with dynamic quantized linear for linear layers and static quantization for the rest, I would do it like this with the composable quantizer:
dq_linear_config = QNNPACKQuantizer.get_dynamic_linear_config()
dq_linear_quantizer = QNNPACKQuantizer().set_global(dq_linear_config)
static_config = QNNPACKQuantizer.get_symmetric_config()
static_quantizer = QNNPACKQuantizer().set_global(static_config)
composed_quantizer = ComposableQuantizer([dq_linear_quantizer, static_quantizer])
We can alternatively do this via:
qnnpack_quantizer = QNNPACKQuantizer().set_global(static_config)
# the following API doesn't exist yet
qnnpack_quantizer.set_module_type_config(torch.nn.Linear, QNNPACKQuantizer.get_dynamic_linear_config())
I think both are fine. Right now I am doing it this way mainly because it felt somewhat easier to use, but I am open to doing it via set_module_config or a similar API.
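For comparison, here is a rough sketch of what that per-module-type override could look like on the quantizer. `set_module_type_config` does not exist yet per the comment above, so the class and method bodies below are purely hypothetical.

```python
from typing import Any, Dict, Optional, Type

import torch


class QuantizerSketch:
    """Hypothetical quantizer surface: one global config plus per-module-type overrides."""

    def __init__(self) -> None:
        self.global_config: Optional[Any] = None
        self.module_type_configs: Dict[Type[torch.nn.Module], Any] = {}

    def set_global(self, quantization_config: Any) -> "QuantizerSketch":
        self.global_config = quantization_config
        return self

    def set_module_type_config(
        self, module_type: Type[torch.nn.Module], quantization_config: Any
    ) -> "QuantizerSketch":
        # More specific than the global config; consulted first during annotation.
        self.module_type_configs[module_type] = quantization_config
        return self

    def config_for(self, module_type: Type[torch.nn.Module]) -> Optional[Any]:
        return self.module_type_configs.get(module_type, self.global_config)


# e.g. dynamic linear layered on top of an otherwise static global config:
# quantizer = (QuantizerSketch()
#              .set_global(static_config)
#              .set_module_type_config(torch.nn.Linear, dq_linear_config))
```

With something along these lines, a single quantizer could cover the dynamic-linear-plus-static use case that the ComposableQuantizer composition above is emulating.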
Let me think a bit more. If it makes sense to just use the set_module_type_config API, then I will remove the config-to-annotator mapping part from the PR and follow up separately.
Ok, I thought about this and I don't think we should do:
dq_linear_config = QNNPACKQuantizer.get_dynamic_linear_config()
dq_linear_quantizer = QNNPACKQuantizer().set_global(dq_linear_config)
static_config = QNNPACKQuantizer.get_symmetric_config()
static_quantizer = QNNPACKQuantizer().set_global(static_config)
composed_quantizer = ComposableQuantizer([dq_linear_quantizer, static_quantizer])
But for now I will use the annotator API until we have a better solution.
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
Just to confirm, the plan is to follow up with the set_module_type_config API, right?
Accepting to unblock for now; we can revisit the API a bit later.
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… dqlinear config Pull Request resolved: #99399 This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. ghstack-source-id: 187514867 Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/)
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
… to support dqlinear config" This diff introduces a few refactors: - Move observer creation to utils.py. - Use quantization spec to supply args to observers. - Use annotation function registration corresponding QuantizationConfig. This will be later used in dynamic quantized linear. Differential Revision: [D45073790](https://our.internmc.facebook.com/intern/diff/D45073790/) [ghstack-poisoned]
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):

This diff introduces a few refactors:
- Move observer creation to utils.py.
- Use quantization spec to supply args to observers.
- Use annotation function registration corresponding to the QuantizationConfig. This will be later used in dynamic quantized linear.

Differential Revision: D45073790
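To illustrate the "quantization spec supplies observer args" refactor, here is a simplified sketch. The SpecSketch fields and the create_observer helper are assumptions standing in for the PR's QuantizationSpec and utils.py code; only the observer classes and their with_args constructor pattern come from torch.ao.quantization.

```python
from dataclasses import dataclass

import torch
from torch.ao.quantization.observer import MinMaxObserver, PerChannelMinMaxObserver


@dataclass(frozen=True)
class SpecSketch:
    """Simplified stand-in for a quantization spec that carries observer arguments."""
    dtype: torch.dtype
    quant_min: int
    quant_max: int
    qscheme: torch.qscheme
    ch_axis: int = 0


def create_observer(spec: SpecSketch):
    """Build an observer constructor from the spec instead of hard-coding kwargs."""
    if spec.qscheme in (torch.per_channel_symmetric, torch.per_channel_affine):
        return PerChannelMinMaxObserver.with_args(
            dtype=spec.dtype,
            quant_min=spec.quant_min,
            quant_max=spec.quant_max,
            qscheme=spec.qscheme,
            ch_axis=spec.ch_axis,
        )
    return MinMaxObserver.with_args(
        dtype=spec.dtype,
        quant_min=spec.quant_min,
        quant_max=spec.quant_max,
        qscheme=spec.qscheme,
    )


# Example: an 8-bit per-channel symmetric weight spec driving observer creation.
weight_spec = SpecSketch(torch.int8, -127, 127, torch.per_channel_symmetric)
weight_observer_ctr = create_observer(weight_spec)
```

The point of the refactor is that annotation code passes a spec around and the observer kwargs are derived in one place, rather than each annotator hard-coding observer arguments.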