feat: value quota based idl decoder limiting #4657
Conversation
…ug); modify skip_any_vec to do limit checks
One thing I am unsure about, or have probably missed (sorry for the potentially stupid question): does the metering limit also need to be checked on ordinary decoding of Candid, when a value is neither skipped nor recursively decoded?
Very nice PR!
I have only some minor comments and one question (probably not relevant).
Actually, I'm not sure I understand the question (which worries me). I think all calls (initial, skip and recursive) should be doing the check, but maybe you've seen something I haven't.
@luc-blaeser PTAL (and thanks for the review!)
Can we document the cost model somewhere, so that people can roughly know how to tune the parameters?
rts/motoko-rts/src/idl.rs
Outdated
@@ -314,6 +319,7 @@ unsafe fn skip_any_vec(buf: *mut Buf, typtbl: *mut *mut u8, t: i32, count: u32)
// makes no progress. No point in calling it over and over again.
// (This is easier to detect this way than by analyzing the type table,
// where we’d have to chase single-field-records.)
idl_limit_check((count - 1) as u64);
Can `count` be 0?
No, because we return on the preceding line 311 when `count == 0`.
Not that I have any concrete concern; I was just curious whether I had covered all the aspects.
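A minimal sketch of the bookkeeping being discussed, assuming a simple decrementing counter (names and structure are illustrative, not the actual motoko-rts code): the early return for `count == 0` is what keeps the `count - 1` charge from underflowing.

```rust
/// Illustrative value quota; the real RTS traps rather than panicking.
struct ValueQuota(u64);

impl ValueQuota {
    /// Decrement the remaining quota, failing when it is exhausted.
    fn check(&mut self, cost: u64) {
        assert!(self.0 >= cost, "IDL value limit exceeded");
        self.0 -= cost;
    }
}

/// Sketch of the vector-skipping charge: `count - 1` is charged up front
/// (the vector value itself already paid for one), guarded by the early
/// return that makes `count == 0` safe.
fn skip_any_vec(quota: &mut ValueQuota, count: u32) {
    if count == 0 {
        return; // mirrors the early return discussed above
    }
    quota.check((count - 1) as u64);
    // ... skip each of the `count` elements here ...
}
```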
Yes, that's the idea.
Fortunately, the blob decoder first checks that the blob does not exceed the length of the candid payload, which is limited to 10MB by the IC (in the worst case). The Nat and Int decoders also check they don't decode past the candid payload, so I was hoping that would limit the allocation.
Hmm, for arrays, maybe one could decrement the quota by the number of elements before allocation to trigger the check early, and then bulk-increment the quota before deserializing the elements (to compensate for the earlier bulk check). Not sure.
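The refinement floated here might look like the following sketch. This is entirely hypothetical: `decode_array` and the charge/refund scheme are assumptions for illustration, not the PR's code.

```rust
/// Hypothetical sketch of the bulk pre-charge idea: charge the whole element
/// count before allocating (so absurd counts fail early), then refund it so
/// the ordinary per-element charges are not counted twice.
fn decode_array(quota: &mut u64, count: u64) -> Result<(), &'static str> {
    // Eager bulk check, before any allocation; leaves the quota
    // untouched on failure.
    *quota = quota.checked_sub(count).ok_or("IDL value limit exceeded")?;
    // ... allocate `count` slots here ...
    // Refund: each element decode below performs its own charge of 1.
    *quota += count;
    for _ in 0..count {
        *quota = quota.checked_sub(1).ok_or("IDL value limit exceeded")?;
        // ... decode one element ...
    }
    Ok(())
}
```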
Simplifies #4624 to a plain linear limit on the number of decoded values, as a function of the decoded payload size, instead of using two linear functions of the (simulated or real) performance counter and the allocation counter.
The function is:
value_quota(blob) : Nat64 = blob.size() * (numerator/denominator) + bias
where `blob` is the candid blob to be decoded, and `numerator` (default 1), `denominator` (default 1) and `bias` (default 1024) are `Nat32`s.
Much simpler than #4624, and it doesn't depend on the vagaries of instruction metering and byte allocation, which vary with gc and compiler options. But is it good enough?
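The formula above might be sketched as follows. Note one assumption: the multiplication is applied before the integer division (otherwise `numerator / denominator` would truncate to 0 for the defaults); the actual runtime code may order the operations differently.

```rust
/// Illustrative sketch of the quota formula above (not the motoko-rts code).
/// Defaults from the PR: numerator = 1, denominator = 1, bias = 1024.
fn value_quota(blob_size: u64, numerator: u32, denominator: u32, bias: u32) -> u64 {
    // blob.size() * (numerator / denominator) + bias, multiplied first
    // so that integer division does not zero out the ratio.
    blob_size * numerator as u64 / denominator as u64 + bias as u64
}
```

With the defaults, a 10 KB payload yields a quota of 10 * 1024 + 1024 decoded values.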
The constants can be (globally) modified/inspected using prims (`Prim.getCandidLimits`/`Prim.setCandidLimits`), which will eventually need to be exposed in base.
The quota is decremented on every call to deserialize or skip a value in vanilla candid mode (destabilization is not metered).
The quota is eagerly checked before deserializing or skipping arrays.
One possible refinement would be to combine the value quota with a memory quota (though the latter would still vary with gc flavour, and perhaps word size, unless we count logical words).