#809: `cargo build` breaks with "resource temporarily unavailable (error 35)" (I'm assuming EAGAIN) on v2.2.0 and v2.2.2
It can return `EAGAIN`. You could try with `clonefile` disabled, and see if the issue still occurs.
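One way to isolate this: a minimal probe (a sketch, not an official test procedure) that calls `clonefile(2)` directly on the affected dataset and reports the errno. The paths are hypothetical, and it assumes the Rust `libc` crate's macOS `clonefile` binding:

```rust
use std::ffi::CString;

fn main() {
    // Hypothetical paths -- point these at a file on the affected ZFS dataset.
    let src = CString::new("/Volumes/tank/testfile").unwrap();
    let dst = CString::new("/Volumes/tank/testfile.clone").unwrap();

    // clonefile(2) creates a copy-on-write clone; flags = 0 for defaults.
    let rc = unsafe { libc::clonefile(src.as_ptr(), dst.as_ptr(), 0) };
    if rc != 0 {
        let err = std::io::Error::last_os_error();
        // EAGAIN is errno 35 on macOS.
        eprintln!(
            "clonefile failed: {err} (EAGAIN? {})",
            err.raw_os_error() == Some(libc::EAGAIN)
        );
    } else {
        println!("clonefile succeeded");
    }
}
```

If this probe fails with `EAGAIN` while an ordinary read/write copy of the same file succeeds, that would point at the clone path specifically rather than the filesystem as a whole.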
That makes sense, thank you. After some searching I cannot seem to find how I would disable `clonefile`. Is there a dataset or zpool option to disable support for `clonefile`? Is this issue with …? Thank you
Looks like we should pull in …
Falling back to hard_link when that happens: retrying can lead to a very long wait before copying works (up to 4 secs in my tests), while hard_linking works straight away. Looks related to openzfsonosx/zfs#809. Closes rust-lang#13838
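For reference, here is a simplified sketch of the approach the commit message describes -- not cargo's actual patch; the function name is made up, and it assumes the `libc` crate for the `EAGAIN` constant:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Copy `from` to `to`, but if the copy fails with EAGAIN (as the
/// clonefile-backed copy apparently can here), fall back to a hard
/// link, which the reporter found works straight away.
fn copy_or_hard_link(from: &Path, to: &Path) -> io::Result<u64> {
    match fs::copy(from, to) {
        Err(e) if e.raw_os_error() == Some(libc::EAGAIN) => {
            fs::hard_link(from, to)?;
            // A hard link copies no bytes; report the file's length to
            // keep the fs::copy-style return value meaningful.
            Ok(fs::metadata(from)?.len())
        }
        other => other,
    }
}
```

Falling back immediately, rather than retrying, avoids the multi-second stall measured in the commit message, at the cost of the two paths sharing one inode.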
Running `cargo build` on a large Rust project gives `resource temporarily unavailable (error 35)` when cargo attempts to link files between subdirectories of `build`.

Bug visible on:
Bug does not recur on (`M1 Max`, `Ventura`, `v2.1.6`); neither with a "vanilla" (defaults) pool+dataset nor with the following options:

I could not find a similar bug upstream, and I'm unsure if this is the right place to report this.
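As a quick sanity check on the "I'm assuming EAGAIN" part of the title, a tiny sketch confirms that errno 35 maps to `EAGAIN` ("Resource temporarily unavailable") on macOS:

```rust
use std::io::Error;

fn main() {
    // On macOS (and other BSD-derived systems), errno 35 is EAGAIN --
    // the same string cargo reports in the failing build.
    let e = Error::from_raw_os_error(35);
    println!("{e}"); // prints: Resource temporarily unavailable (os error 35)
    assert_eq!(e.raw_os_error(), Some(35));
}
```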
Unfortunately, this came up while commissioning an M1 Max machine (my first), and the above is from my notes, since the machine now works. If this is not a known bug, however, and can't easily be reproduced with the above, please let me know and I will set up some VMs (or find a spare M1 machine) to reproduce.