[WIP] Restructure the memory pipeline #118
Conversation
Nice work getting rid of these clones! Why does …
Force-pushed from e7e7767 to ca12283 (compare).
Force-pushed from ca12283 to f701fab (compare).
@adr1anh I originally did that, but then I didn't want to pollute the …
```diff
- let W = R1CSWitness::<G>::new(shape, self.aux_assignment())?;
- let X = &self.input_assignment()[1..];
+ let W = R1CSWitness::<G>::new(shape, self.aux_assignment().to_vec())?;

  let comm_W = W.commit(ck);

- let instance = R1CSInstance::<G>::new(shape, &comm_W, X)?;
+ let instance = R1CSInstance::<G>::new(shape, comm_W, self.input_assignment().to_vec())?;
```
Since two people have asked me why there are still `clone`s here, I want to clarify. This function, `r1cs_instance_and_witness`, is no longer being called in `prove_step`. This change is purely aesthetic: it moves the `.to_vec()` call that was inside `R1CSWitness::new`/`R1CSInstance::new` out to the call site, so the copy is explicit on construction and the compiler is happy. Since we do not call this function, there are no extra clones.
It's a good change, and follows C-CALLER-CONTROL
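For context, C-CALLER-CONTROL (from the Rust API Guidelines) says that a function needing owned data should take it by value, so the caller decides whether to move or clone. A minimal sketch with hypothetical simplified types (not the real `R1CSWitness`):

```rust
// Hypothetical stand-in for a witness type that must own its buffer.
struct Witness {
    w: Vec<u64>,
}

// Takes the buffer by value: the cost of any copy is visible at the call site.
fn witness_new(aux: Vec<u64>) -> Witness {
    Witness { w: aux }
}

fn main() {
    let aux = vec![1, 2, 3];
    // The caller still needs `aux`, so the clone is explicit here, not hidden in the callee.
    let w1 = witness_new(aux.clone());
    // Last use of `aux`: just move it, no copy at all.
    let w2 = witness_new(aux);
    assert_eq!(w1.w, w2.w);
}
```

This is why moving `.to_vec()` to the call site is an improvement even when the data still gets copied: the allocation is explicit and, for a last use, avoidable.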
```rust
let (u_primary, w_primary) = r1cs::instance_and_witness(
    r1cs_primary,
    &pp.ck_primary,
    input_assignment,
    aux_assignment,
)?;
```
This new `r1cs::instance_and_witness` function eats the inputs instead of cloning them.
```rust
/// `setup = true`. After the initial step, every next Nova step has a fixed shape, so the buffers in
/// `R1CSWitness` and `R1CSInstance` have the exact capacity they need. To be memory efficient,
/// [`WitnessViewCS`] is flagged as `setup = false` and we no longer allow the buffers to resize.
pub struct WitnessViewCS<'a, Scalar>
```
You should be able to use https://github.com/lurk-lab/bellpepper/blob/dev/crates/bellpepper/src/util_cs/witness_cs.rs now, if you want.
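The fixed-capacity idea can be sketched with a hypothetical cut-down structure (not the real `WitnessViewCS` API): once the shape is known, later steps write into a preallocated buffer that must never reallocate.

```rust
// Hypothetical sketch of a "setup = false" witness writer: it may fill the
// buffer but must never grow it past the capacity fixed by the known shape.
struct FixedWitness<'a> {
    buf: &'a mut Vec<u64>,
}

impl<'a> FixedWitness<'a> {
    fn push(&mut self, v: u64) {
        // Resizing would mean the circuit shape changed after setup.
        assert!(self.buf.len() < self.buf.capacity(), "shape changed after setup");
        self.buf.push(v);
    }
}

fn main() {
    // Capacity fixed once, by the shape learned in the setup step.
    let mut storage: Vec<u64> = Vec::with_capacity(3);
    let ptr_before = storage.as_ptr();
    {
        let mut cs = FixedWitness { buf: &mut storage };
        for v in [1, 2, 3] {
            cs.push(v);
        }
    }
    // The buffer kept its original allocation: no reallocation happened.
    assert_eq!(storage.as_ptr(), ptr_before);
    assert_eq!(storage, vec![1, 2, 3]);
}
```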
```rust
    self.multiply_witness_unchecked(W, u_and_X)
}

/// Multiply by a witness representing a dense vector; uses rayon/gpu.
```
Not sure this uses the GPU yet.
This is a mis-comment; it indeed doesn't use the GPU.
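For readers outside the thread, the product in question is a sparse-matrix times dense-vector multiply. Here is a minimal sequential sketch with a hypothetical `(column, value)` row layout; the real code uses arecibo's own sparse format and parallelizes the row loop with rayon (and, per this thread, no GPU is involved):

```rust
// Each row is a list of (column, value) pairs; `z` is the dense witness vector.
fn sparse_mat_vec(rows: &[Vec<(usize, u64)>], z: &[u64]) -> Vec<u64> {
    rows.iter() // with rayon this would be `rows.par_iter()`
        .map(|row| row.iter().map(|&(col, val)| val * z[col]).sum())
        .collect()
}

fn main() {
    // A = [[1, 0, 2], [0, 3, 0]] in sparse form.
    let a = vec![vec![(0, 1), (2, 2)], vec![(1, 3)]];
    let z = [4, 5, 6];
    // Row 0: 1*4 + 2*6 = 16; row 1: 3*5 = 15.
    assert_eq!(sparse_mat_vec(&a, &z), vec![16, 15]);
}
```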
```rust
let mut W = vec![G::Scalar::ZERO; S.num_vars];
W.shrink_to_fit();
let mut E = vec![G::Scalar::ZERO; S.num_cons];
E.shrink_to_fit();
```
What's the capacity allocated by the `vec!` macro? How does that compare to the length of the vector created by the `vec!` macro? Given the answers to these questions, what is the effect of the call to `shrink_to_fit()`?
I'm not sure, I couldn't confirm from the `vec!` documentation that `vec![x; n]` initializes `with_capacity(n)`. So I just redundantly called `shrink_to_fit`.
> I'm not sure, I couldn't confirm from the `vec!` documentation that `vec![x; n]` initializes `with_capacity(n)`.

Why? This is the part of the documentation where this is confirmed. Alternatively, a quick test might have helped answer your question using the `capacity` method.
Ok, I just tested and indeed there's no need for `shrink_to_fit`.
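That test is easy to reproduce: per the `vec!` documentation, `vec![elem; n]` allocates exactly `n` elements (at least in the current std implementation), so length equals capacity and `shrink_to_fit` has nothing to trim. A quick check:

```rust
fn main() {
    // `vec![x; n]` produces a vector whose capacity equals its length.
    let w = vec![0u64; 1000];
    assert_eq!(w.len(), 1000);
    assert_eq!(w.capacity(), 1000);

    // So `shrink_to_fit` is a no-op on a freshly built `vec![x; n]`.
    let mut e = vec![0u64; 500];
    let cap_before = e.capacity();
    e.shrink_to_fit();
    assert_eq!(e.capacity(), cap_before);
}
```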
```rust
pub fn instance_and_witness<G: Group>(
    shape: &R1CSShape<G>,
    ck: &CommitmentKey<G>,
    input_assignment: Vec<G::Scalar>,
    aux_assignment: Vec<G::Scalar>,
) -> Result<(R1CSInstance<G>, R1CSWitness<G>), NovaError> {
    let W = R1CSWitness::<G>::new(shape, aux_assignment)?;
    let comm_W = W.commit(ck);
    let instance = R1CSInstance::<G>::new(shape, comm_W, input_assignment)?;

    Ok((instance, W))
}
```
The refactor of `r1cs_instance_and_witness` in #121 should supersede this.
Closed after #137.
This PR restructures the way `arecibo` approaches memory allocation, reworking the entire memory pipeline. See the notion design doc for more info.

Notable improvements:
- `prove_step` and `R1CSShape::commit_T` no longer clone the large witness.
- In `lurk-rs`, this PR fixes the strange regression we observed when using loaded public parameters. This is a strong indicator that inefficient memory allocations in `arecibo` were creating (and could've created) very unpredictable performance regressions.

To-dos and other outstanding issues:
- Only Nova in `arecibo` was restructured; SuperNova was left untouched. This constrains the scope of the PR. In the future, we should be interested in converting SuperNova to the same memory strategy as well.
- A `ResourceSink` structure was temporarily created to manage the extra buffers `prove_step` needs. This should be integrated into the `RecursiveSNARK` API, maybe with some sort of `RecursiveSNARKEngine`/`RecursiveSNARKEngineTrait` to manage folding.
- `commit_T`: the `Z` vectors should be moved into the `ResourceSink` to de-duplicate this allocation.
- Moving `l_w_primary` and `l_u_primary` into `RecursiveSNARK` was suggested, pointing out the revisiting-Nova paper. I think `ResourceSink` fixes this, but we should make sure.
- Use `WitnessCS` in upstream `bellpepper` to unify with `WitnessViewCS`, which is redundant.