Increase copy timeout #548
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##               main        #548   +/-   ##
=============================================
  Coverage   55.02793%   55.02793%
=============================================
  Files             45          45
  Lines           3580        3580
=============================================
  Hits            1970        1970
  Misses          1450        1450
  Partials         160         160
```
Continue to review the full report in Codecov by Sentry.
```diff
 	"github.com/livepeer/catalyst-api/config"
 	xerrors "github.com/livepeer/catalyst-api/errors"
 	"github.com/livepeer/catalyst-api/log"
 	"github.com/livepeer/catalyst-api/video"
 	"github.com/livepeer/go-tools/drivers"
 )

-const MAX_COPY_FILE_DURATION = 30 * time.Minute
+const MaxCopyFileDuration = 2 * time.Hour
```
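As context for the review thread below, here is a minimal sketch of how a copy timeout like `MaxCopyFileDuration` is typically enforced, via a context deadline on the whole transfer. Everything except the constant itself is an assumption for illustration, not catalyst-api's actual implementation:

```go
package main

import (
	"context"
	"io"
	"net/http"
	"os"
	"time"
)

const MaxCopyFileDuration = 2 * time.Hour

// copyFile bounds an entire download with MaxCopyFileDuration via a
// context deadline, so a stalled or very slow source fails instead of
// hanging indefinitely. Illustrative only; names other than the
// constant are hypothetical.
func copyFile(ctx context.Context, srcURL string, dst io.Writer) error {
	ctx, cancel := context.WithTimeout(ctx, MaxCopyFileDuration)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, srcURL, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Once the deadline passes, reads from resp.Body return an error,
	// which aborts the copy.
	_, err = io.Copy(dst, resp.Body)
	return err
}

func main() {
	out, err := os.Create("output.bin")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := copyFile(context.Background(), "https://example.com/large-file", out); err != nil {
		panic(err)
	}
}
```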
@thomshutt I'm going to up this even more, I think. I guess we should be basing it on supporting up to 30GiB, which is our set maximum? In which case it might need to go up to 6-8 hours 😬
I tried a few different gateways and was only able to get about 1.5MB/s.
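(Rough arithmetic, for reference: 30 GiB ≈ 32.2e9 bytes, and 32.2e9 B / 1.5e6 B/s ≈ 21,500 s, or about 6 hours, consistent with the 6-8 hour estimate above.)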
Yeah, let's up this to something reasonable for pulling from Google Storage / S3 / a decent HTTP gateway. I think we can include a disclaimer that you shouldn't try to ingest a 40GB file from IPFS.
Yeah, since GCS/S3 are so much faster, I think sticking with 2 hours is OK. That allows ~4MB/s to download 30GB, which is still very slow.
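(Checking that figure: 2 hours is 7,200 s, and 30e9 B / 7,200 s ≈ 4.2 MB/s, so ~4MB/s sustained is indeed the minimum throughput needed to finish a 30GB copy inside the new timeout.)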
This is to improve the chances of us being able to download >2GiB files from IPFS.
Moved retryableHttpClient to input_copy.go since it was only used there.
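For readers following along, here is a minimal sketch of what a `retryableHttpClient` helper like the one moved into input_copy.go might look like. It assumes hashicorp/go-retryablehttp and the parameter values shown; the actual catalyst-api implementation may differ:

```go
package main

import (
	"net/http"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

const MaxCopyFileDuration = 2 * time.Hour

// retryableHttpClient is an illustrative sketch, not catalyst-api's
// exact code: it wraps net/http with automatic retries and caps each
// request attempt (including downloading the body) at
// MaxCopyFileDuration.
func retryableHttpClient() *http.Client {
	client := retryablehttp.NewClient()
	client.RetryMax = 2                             // assumed retry count
	client.HTTPClient.Timeout = MaxCopyFileDuration // cap per attempt
	// StandardClient returns a plain *http.Client whose transport
	// performs the retries, so callers use it like any http.Client.
	return client.StandardClient()
}
```

Returning a standard `*http.Client` keeps call sites unchanged while the retry logic lives in the transport, which is presumably why the helper could be moved into input_copy.go without touching its callers.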