Exponential compile-time and type_length_limit blowup when nesting closure wrappers #54540
And another thing I've noticed.
So, type_length_limit is a monomorphization-time thing, and it's used for symbol-name lengths. It's no wonder the blowup happened once you actually raised the type_length_limit. As type aliases are identical to the original type, we can't generate things like debug symbols of a reasonable length in this case. We have to combine both caching and digesting to make the symbols shorter.
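A small hedged illustration of that point (my own example, not from the thread): an alias is only a name for the same type, so anything that has to spell the type out, such as type_name, mangling, or debug info, sees the full expansion.

```rust
// Illustrative only (not from the thread): aliases don't shorten the type.
type Pair = (u32, u32);
type Quad = (Pair, Pair);

fn main() {
    // Prints the fully expanded tuple type, not the alias names, which is
    // also all that symbol mangling and debug info have to work with.
    println!("{}", std::any::type_name::<Quad>());
}
```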
How is it possible, then, that if I don't call the …? And I still don't think the compiler would take 30 seconds to create 20 or so symbol names, each 6 MB long. What I'm trying to say is: I understand that if the bigger limit is needed, it takes longer. But I think the bigger limit should not be needed here. Or, at least, I don't see why it should be.
EDIT: I was likely wrong, see below.
Triage: pending. Pre-RFC that aims to solve this: https://internals.rust-lang.org/t/pre-rfc-a-new-symbol-mangling-scheme
After working on the parts of rustc relevant to this issue, I'm pretty sure none of those types are getting into symbol names, but rather this is …

The reason closures trigger this is because … We can trigger the same blowup without closures:

```rust
#![type_length_limit="8388607"]

pub const MAIN: fn() = <S21 as Exp<()>>::exp as fn();

trait Exp<X> {
    fn exp();
}

impl<X> Exp<X> for () {
    fn exp() {}
}

impl<T: Exp<(X, X)>, X> Exp<X> for (T,) {
    fn exp() {
        <T as Exp<(X, X)>>::exp as fn();
    }
}

type S<T> = (T,);
type S0 = S<()>;
type S1 = S<S0>;
type S2 = S<S1>;
type S3 = S<S2>;
type S4 = S<S3>;
type S5 = S<S4>;
type S6 = S<S5>;
type S7 = S<S6>;
type S8 = S<S7>;
type S9 = S<S8>;
type S10 = S<S9>;
type S11 = S<S10>;
type S12 = S<S11>;
type S13 = S<S12>;
type S14 = S<S13>;
type S15 = S<S14>;
type S16 = S<S15>;
type S17 = S<S16>;
type S18 = S<S17>;
type S19 = S<S18>;
type S20 = S<S19>;
type S21 = S<S20>;
```

Note that a downside of using …

What can you (@vorner) do? Not much, I'm afraid: you can't avoid this issue if you need a closure with a capture.

What can we do? For … For closure captures, maybe they should refer to their parameters abstractly, so we never have to substitute captures (until we need to e.g. compute layout)? Or even keep captures outside of the type itself (had we done this and found it inefficient?).

I'm nominating this issue for discussion at the next compiler team meeting, wrt the last two points above.
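To make the doubling concrete, here is a small runnable sketch (my own, not part of the comment above) that mirrors the Exp shape but has the innermost impl report how long the fully substituted X has become; the printed lengths roughly double with each added level:

```rust
// Standalone sketch (an assumption for illustration, not from the thread):
// the same shape as the Exp example, but measuring the substituted X.
trait Grow<X> {
    fn measure() -> usize;
}

impl<X> Grow<X> for () {
    fn measure() -> usize {
        // At the bottom of the recursion, X is the fully doubled tuple type.
        std::any::type_name::<X>().len()
    }
}

impl<T: Grow<(X, X)>, X> Grow<X> for (T,) {
    fn measure() -> usize {
        // Each level replaces X with (X, X), doubling the written-out type.
        <T as Grow<(X, X)>>::measure()
    }
}

fn main() {
    println!("{}", <((),) as Grow<()>>::measure());          // depth 1
    println!("{}", <(((),),) as Grow<()>>::measure());       // depth 2
    println!("{}", <((((),),),) as Grow<()>>::measure());    // depth 3
    println!("{}", <(((((),),),),) as Grow<()>>::measure()); // depth 4
}
```

At 21 levels, as in the snippet above, the innermost X has on the order of 2^21 components, which is why that snippet needs a type_length_limit in the millions.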
triage: tagging as P-medium, under the assumption that the exponential time blowup here is only observable if the user "opts in" via a raised type_length_limit. If that assumption is invalid, feel free to challenge the priority assignment.
assigning to self to try to resolve this, hopefully with @eddyb's input. Un-nominating.
I'm also seeing this issue using closures with futures. Are there any tricks that might avoid the issue in this case, or even any useful way of tracking down where the compiler is struggling in a large async application?
@ryankurte There's not much you can do; the compiler is most likely struggling to compute the unnecessarily large types.
thanks for the …
I ended up Box'ing most futures instead of …
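A minimal sketch of that kind of workaround (my own example, assuming plain std futures rather than whatever combinators the original code used): boxing erases the concrete future type, so callers no longer accumulate the nesting in their own types.

```rust
// Hedged sketch of the "box the future" workaround; the names and types here
// are assumptions for illustration, not the commenter's actual code.
use std::future::Future;
use std::pin::Pin;

type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;

fn nested() -> impl Future<Output = u64> {
    // Imagine many layers of async closures / combinators here; the concrete
    // type grows with every layer.
    async { 1 + 2 }
}

fn erased() -> BoxFuture<u64> {
    // Callers only ever see BoxFuture<u64>, so the nesting stops propagating
    // (at the cost of an allocation and dynamic dispatch).
    Box::pin(async { nested().await * 2 })
}

fn main() {
    // Not polled here; this only demonstrates that the types line up.
    let _fut: BoxFuture<u64> = erased();
}
```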
Sounds like #72412 might help here.
To repeat a comment I made on #64496: @eddyb has said we should just get rid of this check. @nikomatsakis I'm curious to hear your thoughts on that. Personally I'm not sure; it seems like this can point to areas where compile time etc. would be improved by introducing a boxed future, but it doesn't necessarily do a good job of that.

For reference, here's a change to raise all the limits we needed to in Fuchsia. Some of the limits are quite high (the highest I see is over 18 million). I will say that all of these are in related parts of the code, which tells me there might be a common denominator. I'm not aware of an easy way of finding it if there is, though.

And people working in that part of the code will be left juggling these arbitrary-seeming limits. The compile fails immediately on hitting one of these errors, and the offending type may not be the biggest one in the compile, creating the really unfortunate experience of updating the limit only to have it fail again. At one point I just started increasing it myself by arbitrary amounts over what the compiler suggested.
The compile-time aspect of this was fixed in #72412.
(And the type length limit checks are being worked on in #76772.)
In the case of deltachat, the … Why didn't crater catch this bug? Does this mean …
Ah, sorry for the confusion: we only walk each unique type once when measuring the type length now, so it will have an effect on that too.
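A toy model (my own, not rustc's actual implementation) of what "walk each unique type once" means for the measurement: duplicated subtrees, like the (X, X) doubling earlier in the thread, stop inflating the count.

```rust
// Hedged toy model of a deduplicated type-length walk; the Ty enum and the
// counting scheme are assumptions for illustration only.
use std::collections::HashSet;

#[derive(Hash, PartialEq, Eq)]
enum Ty {
    Unit,
    Tuple(Vec<Ty>),
}

// Count nodes, but count each distinct subtree only once, so repeated
// subtrees no longer blow up the measured "type length".
fn length<'a>(ty: &'a Ty, seen: &mut HashSet<&'a Ty>) -> usize {
    if !seen.insert(ty) {
        return 0; // already measured an identical subtree
    }
    match ty {
        Ty::Unit => 1,
        Ty::Tuple(elems) => 1 + elems.iter().map(|e| length(e, seen)).sum::<usize>(),
    }
}

fn main() {
    // ((unit, unit), (unit, unit)): a naive walk visits 7 nodes,
    // the deduplicated walk counts only 3 distinct ones.
    let quad = Ty::Tuple(vec![
        Ty::Tuple(vec![Ty::Unit, Ty::Unit]),
        Ty::Tuple(vec![Ty::Unit, Ty::Unit]),
    ]);
    println!("{}", length(&quad, &mut HashSet::new())); // prints 3
}
```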
Confirmed that current nightly doesn't require the type length limit increase for Quinn anymore.
Some examples no longer build after the following commits; set a bigger-than-default type_length_limit to let tests pass. The exceptions are not necessary on nightly and can be removed again after rust-lang/rust#54540 is fixed.
Given the discussion above, closing and tracking future issues in #83031.
I think this is better explained by a code snippet:
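The snippet itself didn't survive here; below is a hedged reconstruction of the shape being described. The S wrapper, the wrap/call names, and the details are my assumptions, not necessarily the reporter's exact code.

```rust
// Hedged reconstruction of the reported pattern, not the original snippet:
// a wrapper S around a callable, where each level adds one closure that
// does nothing but call the wrapped f.
struct S<F: Fn()> {
    f: F,
}

impl<F: Fn()> S<F> {
    fn new(f: F) -> Self {
        S { f }
    }

    fn call(&self) {
        (self.f)()
    }

    // The "wrapper": one new closure per level, containing just one call to
    // the wrapped f (via call()).
    fn wrap(self) -> S<impl Fn()> {
        S::new(move || self.call())
    }
}

fn main() {
    // Each additional wrap() is one more level of nesting; the report below
    // measures compile time as more levels are added.
    let s = S::new(|| ()).wrap().wrap().wrap();
    s.call();
}
```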
What it does is create a closure on each level that wraps another received closure. With the number of levels, the compile time and the needed type_length_limit grow exponentially.

However, if I change the two calls to … then the blow-up goes away and it compiles almost instantaneously (I tried it up to 120 levels, without increasing the type length limit), so nesting the S structures like this is OK. But if I change only one of the calls (or even remove it) and leave just one call to the wrapper, the blowup is still there. However, there seems to be nothing around the wrapper that should make it exponential: it contains just one call to the wrapped f, has no captures, and if I look at the code, I still count only one distinct closure on each level.

The times I've measured with different numbers of levels: …
This is happening on both stable and nightly: …

@ishitatsuyuki mentioned on IRC that he would like to have a look.