"Object not found" updating registry #4007
Hm, is this persistent? In the sense, does it reproduce if you try again? Also, does it still happen if you blow away the registry cache?

Removing the registry seems to have some effect.

How unusual! You wouldn't happen to have any proxies or Cargo configuration that try to point to another GitHub repo, would you? I checked, and that id definitely exists in the index...

Once I managed to wipe the right cargo directory it updated correctly, sorry for the shadow edit :)
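A minimal sketch of the wipe-and-retry workaround described above, assuming a default `~/.cargo` home (the exact directory that was wiped was elided in this thread; removing the whole index is the blunt version):

```sh
# Blow away the cached crates.io index so Cargo re-clones it on the next run.
# The path is an assumption for a default installation; adjust if CARGO_HOME is set.
rm -rf ~/.cargo/registry/index

# Any command that touches the index will re-fetch it from scratch.
cargo update
```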
I just had the same really weird behavior.
Weirdly, I'm suddenly seeing this as well... but I can hit the URL, no problem.
(Ah, as per #4245.)
I have this as well.
This solution worked, but the problem is mostly that it's such a weird error 😕
Chiming in with the same issue, also fixed by clearing the registry (though it was happening on a freshly updated nightly, as well as my previous version, which I think was 1.31).
Same issue in v1.41.0 on Linux. Clearing the registry did fix it:

```
    Updating crates.io index
error: failed to fetch `https://github.com/rust-lang/crates.io-index`

Caused by:
  object not found - no match for id (05707ea64ba70866ac7211b5fde456a239e21f55); class=Odb (9); code=NotFound (-3)
```
Same issue here: v1.41.1 on Darwin (macOS Catalina), but unfortunately clearing out the registry didn't help.
If anyone hits this problem, can you compress your `~/.cargo/registry` directory and attach it here so the corrupted state can be inspected?
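In case it helps anyone follow up on this request, here is one way to package the registry, assuming the default `CARGO_HOME` of `~/.cargo` (the exact directory being asked for was elided above, so treat the path as a guess):

```sh
# Archive the whole registry (index plus cached .crate files) for attaching to the issue.
tar czf cargo-registry.tar.gz -C ~/.cargo registry
```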
https://github.com/rust-lang/crates.io-index
@Cache-miss That looks like a basic network error. This issue is for the `object not found` error. If you are consistently having that error, I recommend opening a new issue. You can also try `net.git-fetch-with-cli` as a workaround.
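`net.git-fetch-with-cli` is a documented Cargo option; a sketch of enabling it, assuming a per-user config file (a project-local `.cargo/config.toml` also works, and older toolchains use `config` without the `.toml` extension):

```sh
# Tell Cargo to shell out to the `git` CLI for fetches instead of using libgit2.
cat >> ~/.cargo/config.toml <<'EOF'
[net]
git-fetch-with-cli = true
EOF
```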
Same issue here: v1.42.
This happens consistently; it's not connected with networking. As in here:

vs. here:
I started the investigation back in mid-1.41, and the error message was a bit different then, so here are some details from that, if you will: I managed to repro it in Docker against a copy of the corrupt cache.

I got the git command which Cargo runs under the hood:

Skipping some steps... Unfortunately, I don't have much time for another investigation and will do a workaround instead.
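The exact git command was lost above, but for anyone who wants to poke at their own cache, a rough diagnostic along the same lines (the hashed directory name below is the usual one for crates.io, but treat the paths as assumptions):

```sh
# The cached crates.io index is an ordinary git clone under CARGO_HOME.
cd ~/.cargo/registry/index/github.com-1ecc6299db9ec823

# Check whether the object id from the error message exists in the object database;
# `git cat-file -e` exits non-zero if it does not.
git cat-file -e 05707ea64ba70866ac7211b5fde456a239e21f55 \
  && echo "object present" || echo "object missing"

# Fetching the index by hand often refreshes the object database.
git fetch https://github.com/rust-lang/crates.io-index
```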
Is it possible that the branches may be run in parallel? I know that git can corrupt itself when more than one operation is run on a file repo at the same time. Cargo has a file lock to avoid that (and other similar things), but maybe it has bugs. According to the changelog, there was even a bug fixed in it in 1.41 (#7602).
@ehuss we're lucky today! Here is your corrupted cache and steps to repro:
It's rare, but possible, yes. However, before 1.41 it was never an issue. I've witnessed many times that the job waits for a file lock before starting the download.
Now that I've stopped using a shared cache between branches (which used to work perfectly) and had to stop pre-populating the cache for new branches, CI has obviously become less productive and has gone back to downloading/unzipping the same dependencies again and again. Any progress on this matter?
OK, this is getting really annoying; today I was asked to clean the cargo cache more than 8 times. I even made a way for devs to clean the cache on their own. Can anyone help with this?
@TriplEight Several of the error messages you posted seem unrelated to this issue. Are you seeing the `object not found` error specifically? I'm pretty unfamiliar with GitLab CI, and particularly how it handles caching. You might want to check that the filesystem supports the style of locking Cargo uses.
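For what it's worth, a quick way to sanity-check advisory locking on the cache filesystem (this assumes Cargo relies on `flock`-style locks on Linux and uses the util-linux `flock` tool; paths are placeholders):

```sh
CACHE_DIR=/path/to/ci/cache        # placeholder: wherever the CI cache is mounted
touch "$CACHE_DIR/.lock-test"

# Hold an exclusive lock for a few seconds in the background...
flock --exclusive "$CACHE_DIR/.lock-test" -c 'sleep 5' &
sleep 1

# ...then try to take it again without blocking. If this prints "NOT enforced",
# the filesystem is not honoring exclusive locks and a shared cache will misbehave.
flock --exclusive --nonblock "$CACHE_DIR/.lock-test" -c 'echo "lock NOT enforced"' \
  || echo "lock enforced as expected"
wait
```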
Version: `cargo 0.19.0-nightly (fa7584c14 2017-04-26)`