Refactor QModuleMixin and Calibration and fix streamline bug #249

Merged 9 commits into main from refactor_qmodule on Jul 19, 2024

Conversation

@dacorvo (Collaborator) commented on Jul 19, 2024

What does this PR do?

This heavily refactors QModuleMixin and Calibration to simplify the quantization of inputs and outputs.

This also fixes a bug in streamline mode: instead of disabling only the quantization of the outputs of a quantized module when they are immediately dequantized, it also disabled the quantization of its inputs.
With that fix, inference of int8/int8 models can be up to 50% faster thanks to the accelerated torch._int_mm kernel.
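
For concreteness, here is a minimal sketch of the int8 fast path this unlocks. torch._int_mm is a private PyTorch op that multiplies two int8 matrices into int32 accumulators; it generally requires CUDA tensors and has shape/alignment constraints that vary across versions. The scales below are made-up illustration values, not part of this PR:

```python
import torch

device = torch.device("cuda")

# Quantized activations and weights (int8); shapes chosen to satisfy the
# usual cuBLAS alignment constraints (m > 16, k and n multiples of 8).
a = torch.randint(-128, 128, (32, 64), dtype=torch.int8, device=device)
b = torch.randint(-128, 128, (64, 128), dtype=torch.int8, device=device)

# int8 x int8 -> int32 matmul: the accelerated kernel that only kicks in
# when both the inputs and the weights of a QLinear are quantized.
c_int32 = torch._int_mm(a, b)

# Rescale with the (illustrative) input and weight scales to recover floats.
input_scale, weight_scale = 0.02, 0.01
c_float = c_int32.to(torch.float32) * (input_scale * weight_scale)
```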

This reveals an overflow in the qfloat8_e5m2 activations test, which is removed for now.
Only QLinear might request its inputs to be always quantized, as it is the only layer for which an optimized kernel exists.
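
As a purely illustrative sketch of that kind of opt-in (the attribute name below is hypothetical, not quanto's actual API):

```python
import torch

class QModuleMixin:
    # Illustrative default: most quantized modules do not need their inputs
    # quantized, since no optimized kernel can take advantage of it.
    requires_quantized_inputs = False

class QLinear(QModuleMixin, torch.nn.Linear):
    # QLinear opts in: with int8 inputs and int8 weights, its matmul can be
    # dispatched to the accelerated torch._int_mm kernel.
    requires_quantized_inputs = True
```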
Putting the input/output quantization code inside module forward hooks ensures it is called only after the calibration hooks. This greatly simplifies the calibration code, in particular by avoiding several calls to forward during output calibration.
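
A minimal sketch of that hook layout, assuming a per-module `quantize` callable (hypothetical, standing in for quanto's activation quantizers); PyTorch calls hooks in registration order, so registering these after the calibration hooks guarantees calibration sees the raw tensors first:

```python
import torch

def attach_quantization_hooks(module: torch.nn.Module, quantize):
    def quantize_inputs(module, args):
        # Forward pre-hook: returning a tuple replaces the module's inputs.
        return tuple(
            quantize(a) if isinstance(a, torch.Tensor) else a for a in args
        )

    def quantize_outputs(module, args, output):
        # Forward hook: returning a value replaces the module's output.
        return quantize(output)

    module.register_forward_pre_hook(quantize_inputs)
    module.register_forward_hook(quantize_outputs)
```

With this layout, streamline mode only has to skip the output hook of modules whose outputs would be immediately dequantized, leaving the input hook, and thus the int8 matmul fast path, intact.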
@dacorvo merged commit bb95382 into main on Jul 19, 2024
12 checks passed
@dacorvo deleted the refactor_qmodule branch on July 19, 2024 at 15:32