optimize memory allocation of changesets with many small strings #5614
Conversation
src/realm/sync/changeset.hpp
Outdated
if (m_string_buffer->capacity() - m_string_buffer->size() < string.size() &&
    string.size() < small_string_buffer_size) {
    m_string_buffer->reserve(m_string_buffer->capacity() + small_string_buffer_size);
}
Do we want to instead use exponential growth with a minimum size? Growing by 1024 bytes each time makes the constant factor a lot better, but still leaves the total copying cost quadratic.
Suggested change:
- if (m_string_buffer->capacity() - m_string_buffer->size() < string.size() &&
-     string.size() < small_string_buffer_size) {
-     m_string_buffer->reserve(m_string_buffer->capacity() + small_string_buffer_size);
- }
+ if (m_string_buffer->capacity() - m_string_buffer->size() < string.size()) {
+     m_string_buffer->reserve(std::max(std::max(small_string_buffer_size, m_string_buffer->capacity() * 2), m_string_buffer->size() + string.size()));
+ }
I was going to suggest this exact change, but then I thought "wait, don't vectors/strings amortize their growth?" At least on libc++, strings double in size when they exceed their capacity, until the capacity reaches half the string's maximum size. Then I read the cppreference page for reserve() and realized we're probably shooting ourselves in the foot here: https://en.cppreference.com/w/cpp/string/basic_string/reserve. Until C++20:
- If new_cap is less than the current capacity(), this is a non-binding shrink request.
- If new_cap is less than the current size(), this is a non-binding shrink-to-fit request equivalent to shrink_to_fit() (since C++11).
I suspect that what's going on here is that we reserve 1024, and then for every additional string we append past the first 1024 bytes, we effectively shrink_to_fit the string. If we change this to only call reserve() when the capacity is less than 1024, do you still see the slowdown?
Btw, this behavior goes away in C++20.
Huh, I wouldn't have guessed that reserve() would do a shrink_to_fit(). I guess I'm glad C++20 fixes that.
Interesting, good analysis. I will change this to let the string implementation do its own capacity management, and ask Eric to check whether it is more effective.
@ericjordanmossman confirmed that these changes dropped the customer download from ~70s to ~8s on iOS 14.
A customer (HELP-34650) reported slow downloads on iOS 14 and Android. Profiling sync on an iOS 14 device revealed tons of time being spent requesting memory. I tried to reproduce this on macOS and Ubuntu but could not. @ericjordanmossman profiled an actual iOS 14 device, and we found a bottleneck in InstructionBuilder::add_string_range: I noticed that changesets were mostly adding small strings (~10-50 bytes), such that m_string_buffer has to constantly request more memory, but only by a little bit each time once it outgrows the initial 1024 bytes of reserved capacity. The fix is to keep reserving buffer space in 1 KB chunks rather than increasing it a little bit each time. I think this is not a problem on machines with more memory, because if the initial allocation uses memory that has additional space after the requested block, then subsequent allocations can just extend the range rather than moving the entire block elsewhere. This wouldn't show up in benchmarks, so I haven't added any. However, profiling the same workload as above with this optimization shows a ~60x speedup.