[Performance] Upload speed drops while uploading 1k files in a folder #5061
Comments
I can confirm this locally, uploading 1000 4 KB binary files with the testpilot client 3.0.0.9006-beta1; no search service running.
⛏️ 👀
Unfortunately, no ListFolder is involved ...
Hmmm, might be size aggregation 🤔
To calculate the folder size we …
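For illustration, here is a rough reconstruction of the "sum up all children" approach discussed in this thread (names are mine, not the actual decomposedfs code); it makes the quadratic cost of recalculating on every upload visible:

```go
package sketch

import (
	"os"
	"path/filepath"
)

// calculateTreeSize sums the sizes of all children of dir, recursing
// into subdirectories. If this runs on every single upload, syncing
// n files into one folder costs O(n^2) stat calls overall, which
// matches the observed slowdown.
func calculateTreeSize(dir string) (uint64, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	var size uint64
	for _, e := range entries {
		if e.IsDir() {
			sub, err := calculateTreeSize(filepath.Join(dir, e.Name()))
			if err != nil {
				continue // child vanished mid-walk; skip it
			}
			size += sub
			continue
		}
		info, err := e.Info()
		if err != nil {
			continue
		}
		size += uint64(info.Size())
	}
	return size, nil
}
```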
There might be a small optimization, but I have to check if it has a real impact. Meh, Jaeger is limited to 1500 traces. I'll submit a PR that slightly improves calculating the folder size by using …
That is just a small improvement ... AFAIR we made the upload finish async in experimental, so the problem should not arise there. In line 264 we could make postprocessing async in a goroutine; at least the size aggregation can be done async, as sketched below.
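A minimal sketch of that idea, assuming a hypothetical propagateSizeDiff helper (all names invented, not the real API):

```go
package sketch

import "log"

// Node is a minimal stand-in for a decomposedfs node; the fields and
// the propagateSizeDiff helper below are hypothetical.
type Node struct {
	ID       string
	ParentID string
}

// propagateSizeDiff would walk up the tree and add diff to each
// ancestor's treesize; stubbed out here.
func propagateSizeDiff(parentID string, diff int64) error { return nil }

// finishUpload persists blob and metadata synchronously, then hands
// the treesize aggregation to a goroutine so the upload response is
// not blocked on walking the tree.
func finishUpload(n *Node, uploadedBytes int64) error {
	// ...blob and metadata are written synchronously here...

	go func() {
		if err := propagateSizeDiff(n.ParentID, uploadedBytes); err != nil {
			log.Printf("async size propagation failed for node %s: %v", n.ID, err)
		}
	}()
	return nil
}
```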
Other options to address the calculateTreeSize performance:

- **Lock the parent**: instead of summing up all children, lock the parent, read the treesize, add the size of the uploaded file, write back the treesize, unlock the parent (see the sketch after this comment). This is an improvement that can be done independently of the async propagation below, but special care needs to be taken to make sure we do not lose a file upload because the parent was locked and we did not wait long enough. Scary. Anyway, we should have a CLI tool to recalculate the treesize of a space.
- **Fully async upload with upload journal**: on experimental we already have async postprocessing.
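A sketch of the lock-the-parent variant, with an invented parentHandle interface standing in for the real metadata API:

```go
package sketch

// parentHandle abstracts the metadata operations the optimization
// needs; all names are hypothetical, not the actual decomposedfs API.
type parentHandle interface {
	Lock() (unlock func(), err error)
	ReadTreeSize() (int64, error)
	WriteTreeSize(int64) error
}

// addToTreeSize replaces the O(children) re-summation with an O(1)
// read-modify-write under a parent lock.
func addToTreeSize(parent parentHandle, fileSize int64) error {
	unlock, err := parent.Lock()
	if err != nil {
		// must be surfaced so the upload can be retried: silently
		// giving up because the parent stayed locked too long would
		// lose the size update, or worse, the upload itself
		return err
	}
	defer unlock()

	size, err := parent.ReadTreeSize()
	if err != nil {
		return err
	}
	return parent.WriteTreeSize(size + fileSize)
}
```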
We could use the tus upload info as a journal, as it already keeps track of uploads, but we would have to defer deletion of the upload info until we have finished propagation. Currently, the file is deleted before we do the propagation.
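Sketched with hypothetical names, the ordering being the whole point:

```go
package sketch

import "os"

// tusUpload is a minimal stand-in for a tus upload session; the
// fields and helpers are hypothetical.
type tusUpload struct {
	InfoPath string // the tus .info file, doubling as a journal entry
	ParentID string
	Size     int64
}

func moveBlobIntoPlace(u *tusUpload) error            { return nil } // hypothetical
func propagateDiff(parentID string, diff int64) error { return nil } // hypothetical

// completeUpload removes the .info file only after propagation has
// succeeded. If the process dies in between, the surviving journal
// entry lets a repair job replay the propagation instead of losing it.
func completeUpload(u *tusUpload) error {
	if err := moveBlobIntoPlace(u); err != nil {
		return err
	}
	if err := propagateDiff(u.ParentID, u.Size); err != nil {
		return err // journal entry survives; propagation can be retried
	}
	return os.Remove(u.InfoPath)
}
```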
Before we can propagate the size diff we first have to be able to atomically calculate it based on the previous version and the current file. While looking into that we found some corner cases and had to rewrite the finish-upload code.
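A toy illustration of that diff, assuming a single lock guards both the read of the previous size and the version swap (names invented):

```go
package sketch

type fileNode struct{ size int64 }

// installVersion swaps the new blob in and returns the diff to
// propagate. Reading the previous size and replacing the version must
// happen under the same lock; otherwise a concurrent overwrite can
// make us propagate a stale diff and the aggregated treesize drifts.
func installVersion(n *fileNode, newSize int64, unlock func()) int64 {
	defer unlock()
	diff := newSize - n.size // e.g. 4096 - 10240 = -6144 when a file shrinks
	n.size = newSize
	return diff
}
```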
Describe the bug
During upload of 1k files to a directory, performance drops heavily. Uploading 1k files to a new directory is initially faster, then drops again. All runs were done in a single sync run with the 3.0-prerelease desktop client.
Steps to reproduce
Steps to reproduce the behavior:
```bash
for n in {0..999}; do dd if=/dev/urandom of=random$( printf %03d "$n" ).file bs=1 count=$(( RANDOM + 1024 )); done
```
Expected behavior
Upload of 1k small files shouldn't slow down.
Actual behavior
During upload of 1k files to a directory, performance drops heavily.
Setup
oCIS Server:
https://ocis.owncloud.com/
Desktop Client
Desktop was started this way:
Desktop client was running on a Mac mini in a Hetzner datacenter (1 Gbit/s).
Additional context
Desktop log uploaded here: