
gguf-py: add support for I8, I16 and I32 #6045

Merged · 4 commits · Mar 14, 2024

Conversation

@certik (Contributor) commented Mar 13, 2024

These types are documented in the GGUF spec at https://github.com/ggerganov/ggml/blob/9c2adc4962a3a5d259f10db2171e0df5c83e4b05/docs/gguf.md and implemented in C in ggml's `enum ggml_type`. This PR adds support for them in the Python GGUF library.

This code is equivalent to before, but it is now prepared to easily add more NumPy dtypes.
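
For illustration, a minimal sketch of the extensible mapping this refactor enables; the table and helper names here are hypothetical, not the exact gguf-py internals:

```python
import numpy as np

from gguf.constants import GGMLQuantizationType  # part of gguf-py

# Hypothetical lookup table: supporting a new NumPy dtype becomes a
# one-line addition instead of another if/elif branch in the writer.
_NUMPY_TO_GGML = {
    np.dtype(np.float32): GGMLQuantizationType.F32,
    np.dtype(np.float16): GGMLQuantizationType.F16,
    np.dtype(np.int8):    GGMLQuantizationType.I8,   # added by this PR
    np.dtype(np.int16):   GGMLQuantizationType.I16,  # added by this PR
    np.dtype(np.int32):   GGMLQuantizationType.I32,  # added by this PR
}

def ggml_type_for(arr: np.ndarray) -> GGMLQuantizationType:
    """Look up the GGML tensor type for a NumPy array (sketch only)."""
    try:
        return _NUMPY_TO_GGML[arr.dtype]
    except KeyError:
        raise NotImplementedError(f"Unsupported dtype: {arr.dtype}") from None
```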
@ggerganov (Owner) commented

Will merge after #6050

ggerganov merged commit 3ca2348 into ggerganov:master on Mar 14, 2024. 21 checks passed.
ggerganov added a commit that referenced this pull request on Mar 14, 2024.
certik deleted the gguf_writer branch on March 14, 2024.
@certik (Contributor, Author) commented Mar 14, 2024

@ggerganov my apologies for the tensor_shape vs tensor_dtype mistake; I only discovered it this morning as well. I thought I had tested it carefully, but I missed this one. Thanks for fixing it!

@ggerganov (Owner) commented

No problem. Btw, I think we need to update the gguf-py version

@certik (Contributor, Author) commented Mar 14, 2024

> Btw, I think we need to update the gguf-py version

I sent a PR to do so here: #6060.

NeoZhangJianyu pushed a commit to NeoZhangJianyu/llama.cpp that referenced this pull request Mar 15, 2024
* Refactor dtype handling to be extensible

This code is equivalent to before, but it is now prepared to easily add
more NumPy dtypes.

* Add support for I8, I16 and I32

These types are allowed in the GGUF specification.

* Add support for I8, I16 and I32 to gguf_writer

* Add support for I8, I16, I32 to gguf_reader
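
Taken together, the four commits mean an integer tensor can round-trip through gguf-py. A hedged end-to-end sketch of that workflow (the file name and architecture string are placeholders):

```python
import numpy as np

from gguf import GGUFReader, GGUFWriter

# Write an int32 tensor. Before this PR, gguf-py's writer only accepted
# f32/f16 arrays (quantized data needed an explicit raw_dtype).
writer = GGUFWriter("example.gguf", arch="llama")  # placeholder path/arch
writer.add_tensor("token_counts", np.arange(8, dtype=np.int32))
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()

# Read the file back; the tensor type should round-trip as I32.
reader = GGUFReader("example.gguf")
for tensor in reader.tensors:
    print(tensor.name, tensor.tensor_type, tensor.data.dtype)
```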
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
mishig25 pushed a commit to huggingface/huggingface.js that referenced this pull request Jun 3, 2024
Bring `GGMLQuantizationType` up to date; adds `I8`, `I16`, `I32`, `I64`,
`F64`, `IQ1_M` and `BF16`.

Added in:
* ggerganov/llama.cpp#6045
* ggerganov/llama.cpp#6062
* ggerganov/llama.cpp#6302
* ggerganov/llama.cpp#6412
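
For reference, a sketch of the corresponding slice of gguf-py's `GGMLQuantizationType` enum. The numeric values mirror ggml's `enum ggml_type` as of these PRs, and the per-value attribution is inferred from the list above, so verify against the current gguf/constants.py:

```python
from enum import IntEnum

# Sketch of the relevant slice of GGMLQuantizationType; values mirror
# ggml's `enum ggml_type` at the time of these PRs (verify before use).
class GGMLQuantizationType(IntEnum):
    # ... earlier float and quantized types elided ...
    I8    = 24  # this PR (#6045)
    I16   = 25  # this PR (#6045)
    I32   = 26  # this PR (#6045)
    I64   = 27  # follow-up PRs listed above (#6062, #6302, #6412)
    F64   = 28
    IQ1_M = 29
    BF16  = 30
```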