
[PERF] Decrease compaction RAM usage and increase speed #2729

Merged: 13 commits into main on Aug 29, 2024
Conversation

@sanketkedia (Contributor) commented Aug 27, 2024

Description of changes

Summarize the changes made by this PR.

  • Improvements & Bug fixes
    • The BTree of block deltas for the postings list now stores Vec<i32> instead of IntArray, since the per-BTree-add overhead was measured at about 2 KB for IntArray versus 816 bytes for Vec<i32>.
    • When building blocks from block deltas at commit time, the block deltas are now drained. This reduces RAM consumption by about 50%.
    • Removed the intermediary postings list builder, which added unnecessary complexity and performed extra clones.
    • Changed get() for the postings list to read slices instead of Vec<i32>, eliminating one more deep copy (see the illustrative sketch below).
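A minimal sketch of the shape of these changes, using only std types; `PostingsListDelta` and its methods are hypothetical stand-ins for Chroma's block delta code, not its actual API:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a postings-list block delta: the BTree holds
// plain Vec<i32> values rather than per-key integer-array objects.
#[derive(Default)]
struct PostingsListDelta {
    entries: BTreeMap<String, Vec<i32>>,
}

impl PostingsListDelta {
    // Appending positions only grows a Vec<i32>, avoiding the per-add
    // overhead of maintaining an integer array for every key.
    fn add(&mut self, key: &str, positions: &[i32]) {
        self.entries
            .entry(key.to_string())
            .or_default()
            .extend_from_slice(positions);
    }

    // Reads borrow a slice instead of returning an owned Vec<i32>,
    // removing a deep copy on the get() path.
    fn get(&self, key: &str) -> Option<&[i32]> {
        self.entries.get(key).map(|v| v.as_slice())
    }

    // At commit time the delta is drained: ownership of the data moves to
    // the block being built instead of keeping a second copy alive.
    fn drain_for_commit(&mut self) -> Vec<(String, Vec<i32>)> {
        std::mem::take(&mut self.entries).into_iter().collect()
    }
}

fn main() {
    let mut delta = PostingsListDelta::default();
    delta.add("doc-1", &[1, 4, 9]);
    delta.add("doc-1", &[12]);
    assert_eq!(delta.get("doc-1"), Some(&[1, 4, 9, 12][..]));

    let committed = delta.drain_for_commit();
    assert_eq!(committed.len(), 1);
    assert!(delta.get("doc-1").is_none()); // delta no longer holds the data
}
```

The overhead numbers quoted in the bullets come from the author's profiling; the sketch only illustrates why growable vectors, borrowed reads, and draining at commit keep a single owned copy of the postings data.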

Test plan

How are these changes tested?

  • Tests pass locally with `pytest` for Python, `yarn test` for JS, and `cargo test` for Rust

Documentation Changes

None


Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of unexpectedly high quality (readability, modularity, intuitiveness)?

Please tag your PR title with one of: [ENH | BUG | DOC | TST | BLD | PERF | TYP | CLN | CHORE]. See https://docs.trychroma.com/contributing#contributing-code-and-ideas


```rust
let embedding_arr = embedding_builder.values();
for entry in embedding.iter() {
    embedding_arr.append_value(*entry);
}
// ...
match Arc::try_unwrap(self.inner) {
```

A collaborator commented:

nit: can we do `let = match { match {` so that we can avoid the nesting?
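One way to read this suggestion, as a purely illustrative sketch (the `Inner` type and `sum_*` functions below are made up, not the PR's code): bind the whole nested match expression with a `let`, so the follow-up work runs afterwards at the top indentation level rather than inside the innermost arm.

```rust
use std::sync::Arc;

struct Inner {
    values: Vec<f32>,
}

// Before: the follow-up work happens inside the innermost match arm,
// two indentation levels deep.
fn sum_nested(maybe_inner: Option<Arc<Inner>>) -> f32 {
    match maybe_inner {
        None => 0.0,
        Some(inner) => match Arc::try_unwrap(inner) {
            Ok(owned) => {
                // ...real work here, deeply nested...
                owned.values.iter().sum()
            }
            Err(shared) => shared.values.iter().sum(),
        },
    }
}

// After: `let values = match { match { ... } }` extracts the data first,
// and the real work runs afterwards without the extra nesting.
fn sum_flat(maybe_inner: Option<Arc<Inner>>) -> f32 {
    let values: Vec<f32> = match maybe_inner {
        None => Vec::new(),
        Some(inner) => match Arc::try_unwrap(inner) {
            Ok(owned) => owned.values,
            Err(shared) => shared.values.clone(),
        },
    };
    // ...real work here, not nested inside a match arm...
    values.iter().sum()
}

fn main() {
    let inner = Arc::new(Inner { values: vec![1.0, 2.0, 3.0] });
    assert_eq!(sum_nested(Some(inner.clone())), 6.0);
    assert_eq!(sum_flat(Some(inner)), 6.0);
    assert_eq!(sum_flat(None), 0.0);
}
```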


@sanketkedia sanketkedia changed the title Change data type from intarray32 to vec [ENH] Decrease compaction RAM usage and increase speed Aug 28, 2024
@HammadB HammadB changed the title [ENH] Decrease compaction RAM usage and increase speed [PERF] Decrease compaction RAM usage and increase speed Aug 28, 2024
@codetheweb (Contributor) commented:

Perf results, comparing the latest in this branch against 14a744b with the other tokenizer and locking updates:

Time profiling

Follows the same experiment structure as #2736.

sanket-branch-final-cpu.trace.zip

Before (see trace in #2736):

  • First compaction: 33s
  • Second compaction: 40s

After (run 1):

  • First compaction: 29s
  • Second compaction: 36s

Memory profiling

https://wormhole.app/Z26BY#aW8Ij5sW14ORvbbGpkPXiw

  1. Added 5,000 documents
  2. Compacted
  3. Added 5,000 documents
  4. Compacted

Before (run 2):

  • First compaction peak: 600 MiB
  • Second compaction peak: 1146 MiB

After (run 1):

  • First compaction peak: 523 MiB
  • Second compaction peak: 1064 MiB

@sanketkedia sanketkedia merged commit 6337df5 into main Aug 29, 2024
67 checks passed
spikechroma pushed a commit that referenced this pull request Sep 12, 2024
spikechroma pushed a commit that referenced this pull request Sep 12, 2024
@HammadB mentioned this pull request Sep 17, 2024
HammadB added a commit that referenced this pull request Sep 17, 2024
## Description of changes

*Summarize the changes made by this PR.*
 - Improvements & Bug fixes
   - In #2729 we changed to UInt32Array, but some old data may be Int32Array. This is a rather ugly hack to preserve that behavior (see the sketch after this commit message).
 - New functionality
   - None

## Test plan
*How are these changes tested?*
We are scoping cross-version tests, which are needed in general.
- [x] Tests pass locally with `pytest` for Python, `yarn test` for JS, and `cargo test` for Rust

## Documentation Changes
None
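The arrays named here (Int32Array, UInt32Array) are Arrow types. A hedged sketch of what such a compatibility read might look like, assuming the `arrow` crate; the function name and the normalization to `u32` are illustrative assumptions, not the actual shim from this commit:

```rust
use std::sync::Arc;

use arrow::array::{Array, ArrayRef, Int32Array, UInt32Array};

// Illustrative only: accept a postings column that may have been written as
// Int32Array by older versions or as UInt32Array after this change, and
// normalize it to u32 values.
fn read_postings_column(column: &ArrayRef) -> Option<Vec<u32>> {
    if let Some(new_format) = column.as_any().downcast_ref::<UInt32Array>() {
        return Some(new_format.values().to_vec());
    }
    if let Some(old_format) = column.as_any().downcast_ref::<Int32Array>() {
        // Old data: widen i32 -> u32 (postings offsets are non-negative).
        return Some(old_format.values().iter().map(|&v| v as u32).collect());
    }
    None
}

fn main() {
    let old: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let new: ArrayRef = Arc::new(UInt32Array::from(vec![1u32, 2, 3]));
    assert_eq!(read_postings_column(&old), Some(vec![1, 2, 3]));
    assert_eq!(read_postings_column(&new), Some(vec![1, 2, 3]));
}
```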