ggml-gobject: Add concept of execution memory and remove hardcoded memory estimates #11
Closed
We'll use this type to encapsulate the notion of per-instance execution memory.
This is a stop-gap, since we'll be moving the execution buffer elsewhere.
It got pulled in indirectly, but it should be used directly.
This is for the case where you want to create a set of allocated weights without creating the whole model.
Basically, instead of taking GBytes and having memory entangled with the model weights, these are now separate concepts. create_model_desc now also returns a GGMLLanguageModelDesc with separate weight-tree descriptions for the memory weights and the model weights; there will be one set of memory weights per inference instance, as opposed to per model.
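As a rough sketch of the split being described (the type and field names here are illustrative assumptions, not the actual ggml-gobject API):

```c
/* Illustrative only: the real GGMLLanguageModelDesc is a GObject type
 * in ggml-gobject and its layout will differ. The point is the split
 * between per-model weights and per-instance memory weights. */
typedef struct WeightTreeDesc WeightTreeDesc; /* opaque stand-in */

typedef struct {
  WeightTreeDesc *weights_desc;        /* shared across all instances */
  WeightTreeDesc *memory_weights_desc; /* one per inference instance  */
} LanguageModelDescSketch;
```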
Let's not re-use the main computation context, since it needs to be preserved in a special way.
This can be used to build the compute graph without actually executing it, which is useful for memory allocation.
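A minimal sketch of that idea using the plain upstream ggml C API of this era (graph-building signatures vary across ggml revisions): build the forward graph for a tensor but never run it, so the graph can be inspected or measured without touching any tensor data.

```c
#include <ggml.h>

static struct ggml_cgraph
build_without_executing (struct ggml_tensor *model_output)
{
  /* Construct the graph of operations leading to model_output. */
  struct ggml_cgraph gf = ggml_build_forward (model_output);

  /* Deliberately no ggml_graph_compute() here; the graph is only
   * used for memory-layout planning. */
  return gf;
}
```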
These are based on the newly added ggml_alloc context mode in ggml, which uses an allocator to specify the graph memory layout, as opposed to the naive linear allocator. We can also use the allocator to compute how much memory is actually required. The general flow is that you first run the forward pass with worst-case inputs in "recorder" mode to compute a maximal memory usage profile. The recorder mode sets tensor data addresses to a region that doesn't exist in memory and also takes care to ensure that writes to the tensor through the ggml-gobject API don't actually write anything to memory. Afterwards, you can allocate a buffer of the required size and use the allocator in "alloc" mode to create the same layout, this time backed by a real buffer. Using the alloc mode is fairly cheap, since it doesn't require any system calls (all the memory is allocated upfront).
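A sketch of that two-phase flow against the upstream ggml-alloc API from roughly the era of this PR (the ggml_allocr_* calls; exact names differ across ggml revisions, and the two graph arguments stand in for the model's real graph builder, which must be invoked once per phase since measure-mode tensors point at fake addresses):

```c
#include <stdlib.h>
#include <ggml.h>
#include <ggml-alloc.h>

#define TENSOR_ALIGNMENT 32

static void *
allocate_execution_memory (struct ggml_cgraph *worst_case_graph,
                           struct ggml_cgraph *real_graph,
                           size_t             *out_size)
{
  /* Phase 1: "recorder"/measure mode. Tensor data pointers are set to
   * a fake address range, so nothing is written; we only learn the
   * peak memory requirement of the worst-case graph. */
  struct ggml_allocr *measure = ggml_allocr_new_measure (TENSOR_ALIGNMENT);
  size_t mem_size = ggml_allocr_alloc_graph (measure, worst_case_graph)
                    + TENSOR_ALIGNMENT;
  ggml_allocr_free (measure);

  /* Phase 2: "alloc" mode. Lay out the real graph inside an actual
   * buffer; cheap, since it is a single upfront allocation with no
   * further system calls. */
  void *buf = malloc (mem_size);
  struct ggml_allocr *alloc = ggml_allocr_new (buf, mem_size,
                                               TENSOR_ALIGNMENT);
  ggml_allocr_alloc_graph (alloc, real_graph);
  ggml_allocr_free (alloc);

  *out_size = mem_size;
  return buf;
}
```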
…mory size: With this, we can finally remove the semi-hardcoded memory estimator for GPT2 models and instead use a real estimate based on the model's actual memory usage.
This is superseded by #12.
This is another API/ABI break.
Previously we had to hardcode the runtime memory usage. Now GGML has a smarter allocator which can properly estimate the graph memory usage; this requires some changes to ggml-gobject as well.
This also means that execution memory is no longer global; instead, it is now per-cursor.
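As a hypothetical illustration of that last point (the struct and field names below are assumptions, not the actual ggml-gobject cursor type):

```c
#include <stddef.h>

/* Hypothetical: each cursor owns its own execution buffer instead of
 * all cursors sharing one global buffer owned by the model. */
typedef struct {
  void   *exec_buffer;      /* sized via the measure pass above */
  size_t  exec_buffer_size;
} CursorSketch;
```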