Implement PyTorch support for float8 types (F8_E5M2 and F8_E4M3) #404
Conversation
Note that the PyTorch name for the e4m3 type carries an extra "fn" suffix (torch.float8_e4m3fn) to match MLIR, but the format should be the same ("fn" stands for "finite"). We also test that -0.5 round-trips in both formats, which makes sure the format is preserved properly: both types are single-byte and have the same representation for zero, but different representations for -0.5.
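As an illustration of why -0.5 is a useful probe, here is a small sketch using plain PyTorch (independent of this PR's changes): zero is encoded as 0x00 in both formats, while -0.5 gets a different byte in each.

```python
import torch

# -0.5 is exactly representable in both formats but encoded differently:
# e5m2 (bias 15): 1 01110 00  -> 0xB8
# e4m3 (bias 7):  1 0110 000  -> 0xB0
neg_half_e5m2 = torch.tensor([-0.5]).to(torch.float8_e5m2)
neg_half_e4m3 = torch.tensor([-0.5]).to(torch.float8_e4m3fn)
print(neg_half_e5m2.view(torch.uint8))  # tensor([184], dtype=torch.uint8) -> 0xB8
print(neg_half_e4m3.view(torch.uint8))  # tensor([176], dtype=torch.uint8) -> 0xB0
```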
Really need this!
NVidia GPUs support two fp8 types: e5m2 and e4m3. PyTorch supports both from version 2.1; note that safetensors currently does not support these fully, but it will once this PR gets merged: huggingface/safetensors#404

This change implements initial support for e5m2. e4m3 should be a better fit in general, but:

- It has a smaller exponent range, so it requires weight adjustment to fit into this range; Llama2 works fine without it, but Mistral breaks due to small weights that get rounded to zero.
- More critically, NV GPUs only support native fp8 to half/float conversion since Hopper (SM9.0). fp8e5m2 has a fast emulation path because it has the same exponent range as fp16 (similarly to bfloat16, conversion just requires zero padding; see the sketch below), but fp8e4m3 emulation is impractically slow.

We currently just use the builtin PyTorch conversion, which results in an aggregate ~0.5% perplexity drop. This can probably be improved in the future. Warp-parallel matmul now needs to process 4 elements at a time so that we keep loading 4 bytes per thread to maximize effective bandwidth.
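The "zero padding" remark can be sketched in plain PyTorch (this only illustrates the bit-level relationship, not the referenced CUDA code): because e5m2 and fp16 share a 5-bit exponent with the same bias, placing an e5m2 byte into the high byte of a 16-bit word, with the low byte zeroed, already yields the fp16 encoding of the same value.

```python
import torch

# A few values that are exactly representable in e5m2.
x = torch.tensor([-0.5, 1.5, 0.0078125]).to(torch.float8_e5m2)

# Build the fp16 bit pattern by hand (assumes a little-endian host):
# low byte = 0, high byte = the raw e5m2 byte.
raw = x.view(torch.uint8)
padded = torch.stack([torch.zeros_like(raw), raw], dim=-1)
as_fp16 = padded.view(torch.float16).squeeze(-1)

# Matches the regular dtype conversion exactly.
assert torch.equal(as_fp16, x.to(torch.float16))
```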
Thanks a lot for this PR, sorry I missed it when you published it.
This PR completes support for float8 types by making them available when using safetensors from Python with PyTorch; float8 types have been supported by PyTorch since July (pytorch/pytorch#104242).
Note that the PyTorch name for the e4m3 type carries an extra "fn" suffix (torch.float8_e4m3fn) to match MLIR, but the format should be the same ("fn" stands for "finite").
The added test checks that -0.5 round-trips in both formats: both types are single-byte and have the same representation for zero, but different representations for -0.5.
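A minimal sketch of that round-trip through safetensors' PyTorch bindings (using save_file/load_file from safetensors.torch; the file name is just an example):

```python
import torch
from safetensors.torch import save_file, load_file

for dtype in (torch.float8_e5m2, torch.float8_e4m3fn):
    original = torch.tensor([0.0, -0.5]).to(dtype)
    save_file({"w": original}, "fp8_roundtrip.safetensors")
    restored = load_file("fp8_roundtrip.safetensors")["w"]
    assert restored.dtype == dtype
    # Compare via float32 to avoid relying on fp8 comparison kernels.
    assert torch.equal(restored.to(torch.float32), original.to(torch.float32))
```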