fs: get inodes and disk usage via pure go #2171
Conversation
Hi @namreg. Thanks for your PR. I'm waiting for a google or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 79af676 to 4b680b3
/ok-to-test
How much data is in the directory you are getting disk usage and inodes for?
@dashpole Do you mean how much data was in the directory when benchmarking? To be honest, I ran it on a small directory that contains ~10 items. Also, I ran the benchmark only for
The benchmarks were run on the
Can you try it on a large directory? Say 10 GB? Thanks a bunch for running these experiments.
The best data to use is probably a
@dashpole @euank So, I've tried to run the benchmark on one of the kubernetes nodes in the directory
As we can see, the new implementation is still faster, but it consumes a lot more memory. I believe this happens because the Go runtime does not track memory for processes that were spawned by
Nevertheless, here are
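For context, the pre-PR code shells out to `du` for disk usage. A minimal sketch of that exec-based approach (assuming `du` is on the PATH; the helper name `duExec` is mine, not cadvisor's): memory allocated by the child process never appears in Go's `runtime.MemStats`, which is one reason a Go benchmark under-reports the old approach's footprint.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// duExec runs `du -s dir` in a child process and parses the first field
// of its output (size in KiB on most Linux systems). The child's memory
// use is invisible to the Go runtime's allocator statistics.
func duExec(dir string) (kb uint64, err error) {
	out, err := exec.Command("du", "-s", dir).Output()
	if err != nil {
		return 0, err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return 0, fmt.Errorf("unexpected du output: %q", out)
	}
	return strconv.ParseUint(fields[0], 10, 64)
}

func main() {
	kb, err := duExec(".")
	if err != nil {
		panic(err)
	}
	fmt.Println(kb, "KiB")
}
```

Benchmarking this against an in-process walk compares a fork+exec round trip (plus `du`'s own traversal) to pure Go syscalls, which is where the speed difference above comes from.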
I am going to run our eviction tests with this PR as another data point. You mentioned at the beginning: "both
The eviction tests are all good. It looks like the latency of du and find is about the same as this implementation in the inode eviction test (lots of small files).
Force-pushed from 4b680b3 to 353322f
Yeah, I have deployed this patch to a kubernetes cluster. As a result, the number of audit log entries decreased drastically. It turns out,
Ah, I get it now. It was the calls themselves, not the
@dashpole yeah, exactly :) Does anything else concern you in this PR?
Force-pushed from 353322f to 2739586
Force-pushed from 2739586 to 046818d
/retest
@dashpole is there anything else I should fix?
lgtm
By moving this into the cadvisor process, we actually lose a couple of things, such as the ability to run the scan at reduced priority. Many monitoring solutions fork off child processes to do these tasks for exactly these reasons. I like that this is moving to pure Go, but I think it should still fork and alter the niceness values with syscalls.
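The fork-and-renice approach the reviewer describes could look roughly like this sketch: start the child, then lower its scheduling priority with `syscall.Setpriority`. This is an illustrative assumption, not cadvisor's code; the helper name `niceDu` and the `du -s` command are mine, and the renice is best-effort (the child may already have exited).

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// niceDu forks `du -s dir` and lowers only the child's CPU priority
// (19 is the lowest), so a long scan of a huge directory cannot
// compete with the parent process for CPU.
func niceDu(dir string) (string, error) {
	var out bytes.Buffer
	cmd := exec.Command("du", "-s", dir)
	cmd.Stdout = &out
	if err := cmd.Start(); err != nil {
		return "", err
	}
	// Best-effort: if the child already exited, this fails harmlessly.
	if err := syscall.Setpriority(syscall.PRIO_PROCESS, cmd.Process.Pid, 19); err != nil {
		fmt.Fprintln(os.Stderr, "setpriority:", err)
	}
	if err := cmd.Wait(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, err := niceDu(".")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

On Linux one could similarly reduce I/O priority with the `ioprio_set` syscall (what `ionice` does), which the stock `syscall` package does not wrap.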
Hello there,
I would like to go back to the PR I was making almost two years ago (this one: #1576). @euank proposed to get rid of the `find` command. I took his PR as a basis and also got rid of the `du` command. Now we collect both disk space and inode stats at the same time.
Why it matters for us: both `du` and `find` overwhelm our audit log.
Also, I have benchmarked the `du` command against the pure Go implementation and got the following results: as we can see, the pure Go implementation is ~20x faster.