Fix unbounded vec deserialization #592
Conversation
Thanks for the contribution! Does this fix the issue for any hand-crafted message you tested? I have a branch with panic catches around calls to …
I've tried with different vec sizes, and when the size exceeds 1GB it directly errs out without crashing. I guess you can't catch OOM errors on allocations, as it doesn't print backtraces even with the env param. I guess the best way is to not try to allocate that much memory in the first place.
Agreed, a fixed boundary is the safest approach and this change looks fine; I'll leave the decision regarding the maximum size to @howardwu.
Are we able to expand the maximum capacity beyond 1GB? While 1GB may seem large, it is small for our universal setup ceremony, which would need at least 10GB (if not more) for this data structure (IIRC). Can you clarify what case is introducing a failure at 1GB in snarkOS?
@howardwu In the serialized proof, the size of each `Vec` is read from untrusted network input. There might still be different ways to handle this if imposing a size limit is not that easy: …

Still, being able to accept arbitrary input from the network with an unchecked size is not safe and should be carefully considered.
This will work, as long as we still expect to read a length (just not pre-allocate based on it). We could also use something like `Vec::try_reserve` (which would bump the MSRV to 1.57).

This would probably work best, even if it requires some tinkering around the overall serialization setup.
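For illustration, a minimal sketch of the `Vec::try_reserve` variant described above — the function name and the 8-byte little-endian length prefix are assumptions, not snarkVM's actual `CanonicalDeserialize` wire format:

```rust
use std::io::{Error, ErrorKind, Read};

// Sketch of a length-checked read: still read the declared length, but let
// the allocator refuse gracefully instead of aborting the process on OOM.
fn read_vec_checked<R: Read>(mut reader: R) -> Result<Vec<u8>, Error> {
    let mut len_bytes = [0u8; 8];
    reader.read_exact(&mut len_bytes)?;
    let len = u64::from_le_bytes(len_bytes) as usize;

    let mut buf = Vec::new();
    // `try_reserve` (stable since Rust 1.57) returns an error on allocation
    // failure rather than invoking the abort-on-OOM handler.
    buf.try_reserve(len)
        .map_err(|_| Error::new(ErrorKind::OutOfMemory, "vec too large"))?;
    buf.resize(len, 0);
    reader.read_exact(&mut buf)?;
    Ok(buf)
}
```

Note that on systems with memory overcommit this only converts outright allocator failures into errors; a fixed upper bound would still be needed to reject absurd lengths up front.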
Special deserialization for proofs makes sense to me, as the number of elements in the (commitment) vec is known statically: https://github.com/AleoHQ/snarkVM/blob/435f1120b15d0d63944b9935667b607084b83cef/marlin/src/ahp/ahp.rs#L63. We can do a similar analysis for the other collections used in the proof, like those used for evaluations.
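A hedged sketch of what such proof-specific deserialization could look like — both constants below are placeholders, not the actual values from `ahp.rs`:

```rust
use std::io::{Error, ErrorKind, Read};

// Hypothetical proof-specific deserializer: the element count is a
// compile-time constant, so any wire length that disagrees is rejected
// before anything is allocated.
const NUM_COMMITMENTS: usize = 12; // illustrative, not the value in ahp.rs
const COMMITMENT_BYTES: usize = 48; // illustrative compressed-point size

fn read_commitments<R: Read>(
    mut reader: R,
) -> Result<Vec<[u8; COMMITMENT_BYTES]>, Error> {
    let mut len_bytes = [0u8; 8];
    reader.read_exact(&mut len_bytes)?;
    if u64::from_le_bytes(len_bytes) as usize != NUM_COMMITMENTS {
        return Err(Error::new(ErrorKind::InvalidData, "unexpected commitment count"));
    }
    // The allocation is now bounded by a compile-time constant.
    let mut commitments = Vec::with_capacity(NUM_COMMITMENTS);
    for _ in 0..NUM_COMMITMENTS {
        let mut c = [0u8; COMMITMENT_BYTES];
        reader.read_exact(&mut c)?;
        commitments.push(c);
    }
    Ok(commitments)
}
```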
Considering the proof is always 771 bytes on the network, I think the whole structure is statically known. Feel free to supersede this PR, as I'm not sure how to correctly write a special deserializer for the proof. I'd recommend putting it on high priority though, as it's currently still possible to crash every node on the network with a specially crafted payload.
Until proof-specific deserialization is implemented, I proposed a more generic solution in https://github.com/AleoHQ/snarkVM/pull/609. |
Closing as the issue has been addressed in #735.
Motivation

The `Vec` deserialization procedure of `CanonicalDeserialize` didn't check whether the vector size is reasonable, which causes https://github.com/AleoHQ/snarkOS/issues/1534. This PR limits the data size to 1GB, not counting the `Vec` overhead, which should be enough (?).

Edit: it looks like `Transactions`, `Transition`, and `Event` are safe.
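Roughly, the bound has this shape (a sketch only; `MAX_VEC_BYTES` and the helper name are illustrative, not the exact diff):

```rust
// Illustrative sketch of the bound introduced by this PR, not the exact diff.
// Reject any declared length whose payload would exceed 1 GiB.
const MAX_VEC_BYTES: usize = 1024 * 1024 * 1024; // 1 GiB, excluding Vec overhead

fn check_vec_len<T>(len: usize) -> Result<(), String> {
    // Checked multiplication so a huge `len` cannot overflow the comparison.
    match len.checked_mul(core::mem::size_of::<T>()) {
        Some(bytes) if bytes <= MAX_VEC_BYTES => Ok(()),
        _ => Err(format!("vector of {} elements exceeds the 1 GiB limit", len)),
    }
}
```

With `u8` elements this caps a single vector's payload at 1 GiB, matching the bound described above, and it fails before any allocation takes place.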
Test Plan
Currently the tests of `CanonicalSerialize` and `CanonicalDeserialize` only check that the data is intact after a serialization/deserialization cycle, so no test was added there. I'm using a fake client to send bad proof data for testing. There is no significant change in memory usage when I rapidly trigger a fairly large allocation (several hundred MBs).
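As a hypothetical complement to the fake-client test, a unit test over the `check_vec_len` sketch from the Motivation section could pin the bound down (not part of this PR's actual test suite):

```rust
#[test]
fn oversized_length_prefix_is_rejected() {
    // 2^40 one-byte elements would imply a 1 TiB payload, far over the bound.
    assert!(check_vec_len::<u8>(1usize << 40).is_err());
    // A modest length is accepted.
    assert!(check_vec_len::<u8>(1024).is_ok());
}
```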
Related PRs
(Link your related PRs here)