Describe the bug
If you link a library built with VS2017 into client code built with VS2019, you may experience memory corruption.
The direct cause is _Big_allocation_threshold: when allocating more than 4 KB, the allocator over-allocates and aligns the block to 32 bytes. The real address returned by the underlying allocator is stored 8 bytes before the address returned to the caller.
When deallocating, _Deallocate(void* _Ptr, const size_t _Bytes) restores the above-mentioned address from the pointer passed in by the caller if it sees that the size to be deallocated is above _Big_allocation_threshold.
This logic relies heavily on the correctness of the memory size passed into _Deallocate.
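To make the mechanism concrete, here is a simplified sketch of the big-allocation scheme described above. The names (kBigThreshold, my_allocate, my_deallocate) are illustrative, not the actual MSVC STL internals:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Illustrative stand-ins for _Big_allocation_threshold and the 32-byte
// alignment described above; values are assumptions for this sketch.
constexpr std::size_t kBigThreshold = 4096;
constexpr std::size_t kAlignment    = 32;

void* my_allocate(std::size_t bytes) {
    if (bytes < kBigThreshold) {
        return std::malloc(bytes); // small blocks: plain allocation
    }
    // Over-allocate so we can both align to 32 bytes and stash the
    // real pointer in the 8 bytes just before the returned address.
    void* raw = std::malloc(bytes + kAlignment + sizeof(void*));
    auto addr = reinterpret_cast<std::uintptr_t>(raw) + sizeof(void*);
    addr = (addr + kAlignment - 1) & ~std::uintptr_t(kAlignment - 1);
    void* user = reinterpret_cast<void*>(addr);
    static_cast<void**>(user)[-1] = raw; // store the real address
    return user;
}

void my_deallocate(void* ptr, std::size_t bytes) {
    // The caller-supplied byte count selects the branch. If the caller
    // under-reports the size of a big block, the aligned pointer is fed
    // straight to free(), which is the corruption described below.
    if (bytes < kBigThreshold) {
        std::free(ptr);
    } else {
        std::free(static_cast<void**>(ptr)[-1]);
    }
}
```

Note that my_deallocate never inspects the block itself; the branch is chosen purely from the size argument, which is exactly why a wrong size crosses over into freeing a pointer malloc never returned.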
In VS2017, std::_Hash implements its hash table with a std::vector. This was heavily refactored in VS2019 to use _Hash_vec, which appears to be a specialized vector whose size() always equals capacity(). Based on this strong assumption, it passes size() as the memory-size parameter to _Deallocate when a deallocation happens.
Now, if you mix libraries (built with VS2017) and calling code (built with VS2019), code in the libraries that manipulates an unordered_map can break the assumption in the newer std::_Hash, because a std::vector has no such constraint that size() == capacity(). When the unordered_map (or anything based on std::_Hash) is destructed in the calling code (built with VS2019), a misaligned pointer can be passed into the underlying memory allocator, causing either a crash or a subtle memory corruption (in my case, a very hard to reproduce bug).
The problem can be subtle to reproduce: you have to keep adding elements to the unordered_map until you cross _Big_allocation_threshold, then remove enough keys that the memory size assumed from size() of _Vec in xhash falls below _Big_allocation_threshold, and then destruct the unordered_map in the client code to trigger it.
I hope I have explained the issue as clearly as possible. I think this is worth a proper fix, though I cannot think of one off the top of my head.
In the real world, it is not always possible to control which toolchain a third-party library uses, and the potential harm of this issue can be huge.
Expected behavior
The newer std::_Hash needs to consider this edge case if possible.
Thanks for reporting and investigating this issue. We'll need to look into this further; I am uncertain whether it's possible to fix this without wreaking further ABI havoc.
STL version
Visual Studio Community 14.16.27023
This is also DevCom-1190124/VSO-1220461/AB#1220461