Big repo causes resource hog hangs #2405

Open
Kubuxu opened this issue Feb 25, 2016 · 8 comments
Labels
kind/bug A bug in existing code (including security flaws)

Comments

Kubuxu (Member) commented Feb 25, 2016

I can't ls a hash from the files API, nor name publish it, although I can get the object for it.

See: https://ipfs.io/ipns/bin.ipfs.ovh/#QmbXUsz2aYebJvF2GE5i8j3WEP9NhWpQ1p6F6QWRosEyE8.txt

Kubuxu (Member, Author) commented Feb 26, 2016

Seems connected with #2407.
ipfs cat [some hash] also hangs, using a massive amount of disk bandwidth.

Kubuxu changed the title from "Hash from Files API hangs during 'ls' and other" to "Big repo causes resource hog hangs" on Feb 26, 2016
whyrusleeping (Member) commented

could you post a stack dump from the daemon while this hang is happening?
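
If it helps, one way to grab a goroutine dump from a running go-ipfs daemon is through its pprof handlers. A minimal sketch, assuming the daemon exposes them on the default API address (127.0.0.1:5001):

curl 'http://127.0.0.1:5001/debug/pprof/goroutine?debug=2' > ipfs.stacks

The resulting ipfs.stacks file holds the goroutine stack traces at the moment of the hang and can be attached here.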

Kubuxu (Member, Author) commented Mar 9, 2016

Here is the stack trace: https://gist.github.com/Kubuxu/7746e2770650796ab043

Also, this is quite interesting:

1d [kubuxu@vs1:~] $ ipfs files ls /
ovh-kubuxu
public
1d [kubuxu@vs1:~] $ ipfs files ls /public
^C
Error: request canceled
1d [kubuxu@vs1:~] 1 $ ipfs files ls /
^C
Error: request canceled
1d [kubuxu@vs1:~] 1 $ 

whyrusleeping (Member) commented

In the debug/vars log you sent me, there were a lot of calls to HasBlock, only 5 actual Get calls, and one call to Get that errored. I just got my raid box back and set up, so I can start trying out larger tests like this. I would run this with the log level raised to info (ipfs log level all info) and see if anything interesting shows up there.
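
For reference, a quick sketch of raising the log level and watching the daemon's output while reproducing the hang (assuming the daemon is already running):

ipfs log level all info
ipfs log tail

ipfs log tail streams the daemon's log events, so the extra info-level messages will show up as the hanging command runs.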

whyrusleeping (Member) commented

@Kubuxu is this improved by the bloom filter code?
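
For context, the bloom filter cache is controlled by the Datastore.BloomFilterSize config key (size in bytes; 0 leaves it disabled). A sketch of enabling it, assuming a ~1 MiB filter is a reasonable size for this repo:

ipfs config --json Datastore.BloomFilterSize 1048576

The daemon has to be restarted for the new filter size to take effect.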

whyrusleeping added this to the Resource Constraints milestone on Aug 9, 2016
Kubuxu (Member, Author) commented Aug 9, 2016

I traced it to the GC code. Let me find it.

Bloom won't help, as what happens is we recalculate the size of the repo on every call, or something ridiculous like that.
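
As a side note on the GC trigger: if I remember the config layout right, automatic GC (when the daemon runs with --enable-gc) compares the repo size against Datastore.StorageMax, with Datastore.StorageGCWatermark as the high-water percentage. A rough sketch for checking how close a large repo is to that threshold:

ipfs repo stat
ipfs config Datastore.StorageMax
ipfs config Datastore.StorageGCWatermark

Raising StorageMax would only be a workaround here, though; it doesn't address the cost of recomputing the repo size on every call.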

Kubuxu (Member, Author) commented Aug 9, 2016

Here: #2174

It is slow in the case of 250 MiB, and looks like a hang in the case of multiple GiB.

At least I think it might be the same issue; I might be wrong.

EDIT: It might be improved by bloom, but it shouldn't be a problem in the first place: why are we running so many Has calls when we try to access something?

whyrusleeping (Member) commented

This should be partially resolved; ipfs cat no longer tries to run a GC.

em-ly added the kind/bug label on Aug 25, 2016
Stebalien removed this from the Resource Constraints milestone on Apr 29, 2020