Cargo build hangs #88862
Comments
cc @jackh726
One way to minimize would be to see which crate in your workspace triggers the hang, e.g. by building each member crate on its own.
The crate/step that is failing is the final one. I managed to delete parts of the code to make the hang disappear, but I'm not certain I've actually pinpointed it yet. I let it run and it seems to have eventually failed.
I ran with some extra flags and the build completed after a while. The memory usage jumps up dramatically, from 138 MB to 7 GB to 20 GB, and then drops again, repeating. The behaviour is the same without those flags, so this is probably not a hang, just increased complexity causing a longer run time. EDIT: on an older nightly, the max memory usage is around 1.5 GB.
Managed to minimize it somewhat. Was not able to remove all dependencies but most of them.
On the good nightly it takes 40-45 seconds to compile from a clean state; on the regressed nightly it takes 125-500 seconds (and sometimes uses 5-7 GB of memory). I believe the issue has to do with the client traits, their impls, and the tracing instrumentation; replacing or removing parts of these changes the timings. Minimized gist: https://gist.github.com/Emilgardis/194812fa2e73ff6839d9942163329887
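For readers who don't want to open the gist, here is a hypothetical reconstruction of the kind of pattern being described (this is not the actual gist code; it assumes the `async-trait` and `tracing` crates, and all names such as `Client`, `HttpClient`, and `call_chain` are made up):

```rust
// Hypothetical sketch of the suspected pattern (NOT the actual gist code):
// an async client trait whose impl methods are instrumented with tracing,
// so each call adds wrapper futures to an already nested async chain.
use async_trait::async_trait;

#[async_trait]
trait Client {
    async fn request(&self, path: &str) -> String;
}

struct HttpClient;

#[async_trait]
impl Client for HttpClient {
    #[tracing::instrument(skip(self))]
    async fn request(&self, path: &str) -> String {
        format!("GET {}", path)
    }
}

#[tracing::instrument(skip(client))]
async fn call_chain<C: Client>(client: &C) -> String {
    // Every layer here contributes to the size of the generator type the
    // compiler has to build and relate.
    let a = client.request("/a").await;
    let b = client.request("/b").await;
    a + &b
}

fn main() {
    // Building the future (without polling it) is enough to exercise the compiler.
    let _fut = call_chain(&HttpClient);
}
```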
@rustbot ping cleanup

I think it would be useful to further minimize the above gist.
Hey Cleanup Crew ICE-breakers! This bug has been identified as a good candidate for minimization.

cc @AminArria @camelid @chrissimpkins @contrun @DutchGhost @elshize @h-michael @HallerPatrick @hdhoang @hellow554 @henryboisdequin @imtsuki @JamesPatrickGill @kanru @KarlK90 @MAdrianMattocks @matheus-consoli @mental32 @nmccarty @Noah-Kennedy @pard68 @PeytonT @pierreN @Redblueflame @RobbieClarken @RobertoSnap @robjtede @SarthakSingh31 @shekohex @sinato @smmalis37 @steffahn @Stupremee @tamuhey @turboladen @woshilapin @yerke
I took a profile of a debug build of rustc building that gist. This definitely doesn't seem to be a hang per se; there's always some work going on, it's just taking a really, really long time. There's no one specific function we're spending all our time in either; it's more like death by a thousand papercuts. We are spending about 25% of the time in hashmap-related code, but I imagine that's normal for rustc?
Hmm, I haven't looked at rustc profiles before, but 25% in hashmap-related code (I assume you mean code in the hashmap library itself) seems high to me?
No, 25% may even be low. rustc is very HashMap-intensive in most profiles.
A diff of the two runs, and some flamegraphs: before, after. I can publish the raw profdata if wanted.
I'm confused as to why there is no analysis in the "after" flamegraph.
Hi all. Thanks a ton for the help on this, especially @Emilgardis. It's definitely helpful to see that the majority of the time is spent during the codegen phase. I think getting a more minimized repro will help me find a fix for this.
That's gonna be tricky. Even just removing a few of the calls to tracing, something that I'd expect to have no side effects, would drastically reduce the compilation time. Same for deleting code that gets linted as unreachable.
Perhaps if you expand the macros (using something like `cargo expand`), you could then remove parts of the expanded code to narrow it down.
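As a rough illustration of why expanding helps (this is not the exact macro output), `#[tracing::instrument]` on an async fn wraps the original body in an additional future, so each instrumented call shows up as a visible, removable layer in the expanded code. It assumes the `tracing` crate; the name `fetch` is made up:

```rust
// Rough sketch of the effect of `#[tracing::instrument]` on an async fn
// (NOT the exact macro expansion): the original body is wrapped in another
// future via `Instrument`, adding a layer to the resulting generator type.
use tracing::Instrument;

async fn fetch(id: u32) -> u32 {
    let span = tracing::info_span!("fetch", id);
    async move {
        // original function body
        id + 1
    }
    .instrument(span)
    .await
}

fn main() {
    let _fut = fetch(1);
}
```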
It's probably not so much the total compile time as the time spent in one particular part of the compiler.
One thing I just spotted in my profiling is lots of stacks of `collect_items_rec` that are over 50 recursive calls deep. Is that expected? Is it worthwhile/possible to change this method to be iterative instead of recursive?
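For illustration only (this is not rustc's actual collector code), the usual way to make such a traversal iterative is an explicit work list, which keeps the stack depth constant regardless of how deep the item graph is:

```rust
// Generic sketch of replacing deep recursion with an explicit work list.
use std::collections::HashSet;

fn collect_items_iterative(roots: &[u32], neighbors: impl Fn(u32) -> Vec<u32>) -> HashSet<u32> {
    let mut visited = HashSet::new();
    let mut work = Vec::from(roots);
    while let Some(item) = work.pop() {
        // Only process each item once.
        if visited.insert(item) {
            work.extend(neighbors(item));
        }
    }
    visited
}

fn main() {
    // Toy graph: each item n depends on n / 2 until 0.
    let reachable = collect_items_iterative(&[40, 41], |n| if n == 0 { vec![] } else { vec![n / 2] });
    println!("{} items reachable", reachable.len());
}
```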
Managed to get it quite small, but it still directly depends on a few crates. Removing lines 31-33 and lines 48-50 of the minimized code makes the memory footprint smaller. Thanks @camelid for the hint about using the expanded code; I did that before but didn't think to remove parts of tracing.
Managed to make it 0-dep 🎉 It compiles on the playground on stable, but times out / gets SIGKILLed on the current playground nightly (2021-09-14) and beta. Note: it can still be minimized; I just noticed, for example, that check_conn can be simplified further. Edit: reduced it more, though with noticeably better compiler performance: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=90840273ed63d0a3c2b39596b77d396f
For future endeavors like this, I think it'd be very useful to develop a fuzz-like tool that repeatedly removes parts of a program, checks that it still compiles, and measures whether the regression still reproduces. That was basically the workflow I used to get to this.
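A minimal sketch of what such a tool could look like, assuming a single-file reproducer named `repro.rs`, a purely line-based removal strategy, and a fixed 60-second "still slow" threshold (all names and numbers here are made up, and a real tool would also want a compile timeout):

```rust
// Hypothetical "remove-and-retime" minimization loop, roughly the manual
// workflow described above: drop a line, rebuild, and keep the removal only
// if the program still compiles and is still slow.
use std::{fs, process::Command, time::Instant};

fn compiles_slowly(path: &str, threshold_secs: u64) -> bool {
    let start = Instant::now();
    let status = Command::new("rustc")
        .args(["--edition", "2018", path])
        .status()
        .expect("failed to run rustc");
    status.success() && start.elapsed().as_secs() >= threshold_secs
}

fn main() {
    let path = "repro.rs";
    let original = fs::read_to_string(path).expect("read repro.rs");
    let mut kept: Vec<String> = original.lines().map(str::to_string).collect();

    let mut i = 0;
    while i < kept.len() {
        let mut candidate = kept.clone();
        candidate.remove(i);
        fs::write(path, candidate.join("\n")).expect("write candidate");
        if compiles_slowly(path, 60) {
            // The removal preserved the regression; keep it and retry this index.
            kept = candidate;
        } else {
            // Either it no longer compiles or it got fast; restore and move on.
            fs::write(path, kept.join("\n")).expect("restore original");
            i += 1;
        }
    }
}
```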
It can't inline dependencies from other crates, but there is an existing minimization tool along those lines.
It was not helpful when I tried it in this case; maybe it'll work better now that it's all inlined.
@rustbot label +I-compiletime +I-compilemem -I-hang
I only get timeouts with beta/nightly if I use the default "Run", but not if I choose "Build". With Build and nightly I get:
for the first link and
for the second. With stable the times are 0.82s and 4.83s, but the timings vary so widely on the playground that this does not help much. Why does Run time out only for beta/nightly while Build works for all?
The regressed commit is currently not on stable, only on beta and nightly; it will be promoted to stable soon. Hopefully, if a fix can be found, it can be backported to beta.
@hkratz Not sure if this is helpful, but I was able to reproduce the latest minimal playground locally by making a new bin crate and compiling it with logging enabled. Eventually the logs get to:
where the last line repeats thousands and thousands of times.
This is the last one, I think; the more I minimize it, the quicker it compiles (but it is still slower than on 2021-08-25).
Assigning priority as discussed in the Zulip thread of the Prioritization Working Group.

@rustbot label -I-prioritize +P-medium
Status update: I've been trying to look into this. Given the complexity of the MCVE, it's very difficult. A couple of things to note when running the MCVE:
Turns out, this has some really bad perf implications in large types (issue rust-lang#88862). While we technically can handle them fine, it doesn't change test output either way. For now, revert with an added benchmark. Future attempts to change this back will have to consider perf.
It's a bit unfortunate that I wasn't able to find a "real" fix, but as Niko said, it's a decent stopgap measure.
Don't normalize opaque types with escaping late-bound regions

Fixes rust-lang#88862

Turns out, this has some really bad perf implications in large types (issue rust-lang#88862). While we technically can handle them fine, it doesn't change test output either way. For now, revert with an added benchmark. Future attempts to change this back will have to consider perf.

Needs a perf run once rust-lang/rustc-perf#1033 is merged

r? `@nikomatsakis`
Looks like the stopgap measure is causing problems... at least if the analysis in rust-lang/miri#2433 is correct, the 'fix' is causing ICEs in Miri when rustc is built with debug assertions. |
Normalize opaques w/ bound vars (r=lcnr)

First, we reenable normalization of opaque types with escaping late-bound regions to fix rust-lang/miri#2433. This essentially reverts rust-lang#89285. Second, we mitigate the perf regression found in rust-lang#88862 by simplifying the way that we relate (sub and eq) GeneratorWitness types. This relies on the fact that we construct these GeneratorWitness types somewhat particularly (with all free regions found in the witness types replaced with late-bound regions) -- but those bound regions really should be treated as existential regions, not universal ones. Those two facts lead me to believe that we do not need to use the full `higher_ranked_sub` machinery to relate two generator witnesses. I'm pretty confident that this is correct, but I'm glad to discuss this further.
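For context, an illustrative example (not taken from the PR) of what ends up in a generator's witness types: every value held live across an `.await`, including the nested generators themselves, which is why deeply nested async calls make these types compound quickly. The function names here are made up:

```rust
// Illustrative only: values that live across an .await become part of the
// generator's interior ("witness") types, as do the nested futures themselves.
async fn leaf(n: u32) -> u32 {
    n + 1
}

async fn middle(n: u32) -> u32 {
    let kept = vec![n; 4];      // held across the await -> part of middle's witness
    let next = leaf(n).await;   // leaf's generator is also held across this await
    next + kept.len() as u32
}

async fn outer(n: u32) -> u32 {
    let kept = format!("{}", n); // likewise part of outer's witness
    let next = middle(n).await;
    next + kept.len() as u32
}

fn main() {
    // Building (not polling) the outermost future already requires the
    // compiler to construct and relate these nested generator types.
    let _fut = outer(3);
}
```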
Originally posted in Zulip:
Hi! I've found a regression in compilation introduced in #85499,
searched nightlies: from nightly-2021-08-17 to nightly-2021-09-10
regressed nightly: nightly-2021-08-26
searched commits: from b03ccac to 0afc208
regressed commit: 0afc208
bisected with cargo-bisect-rustc v0.6.0
Host triple: x86_64-pc-windows-msvc
Reproduce with:
cargo bisect-rustc --start=2021-08-17 --script .\bisect-script.bat
The issue is that compilation succeeds in an acceptable time before this change, but now it seemingly takes forever. I have not tried this on Linux, only Windows so far. I'm not sure how I should report this, because I'm not able to reproduce it outside my large workspace, where I bisected it. Does anyone have an idea of what this could be, or a good way to figure out the cause?
Some notable dependencies: actix-web, sqlx, serde, chrono, ring, tokio, tracing, reqwest, anyhow, eyre, async-trait, and many more.
I've tried
which outputs at the end
and then just hangs.
How do I proceed?