Item status check: error on maximum item size exceedance and test with specific identifier/access key #485

Open
JustAnotherArchivist wants to merge 1 commit into master

Conversation

JustAnotherArchivist (Contributor)

Fixes #293

@JustAnotherArchivist (Contributor, Author)

Also replaces the abandoned PR #297

@jjjake (Owner) commented Feb 14, 2022

Thanks again @JustAnotherArchivist!

This looks good, but I'm actually going to look into what it'd take to get item_size/files_count limit info from s3.us.archive.org rather than hard-coding it here. I'll keep you posted.
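For illustration, a rough sketch of what such a lookup could look like. The s3.us.archive.org check_limit endpoint is real, but it currently reports overload status; the max_item_size and max_files_count response fields below are hypothetical, invented only to show the idea:

import requests

S3_URL = 'https://s3.us.archive.org'

def get_item_limits(identifier, access_key=None):
    # Query the S3 endpoint instead of hard-coding limits.
    # NOTE: the 'max_item_size'/'max_files_count' fields are assumed,
    # not part of the documented check_limit response today.
    params = {'check_limit': 1, 'bucket': identifier}
    if access_key:
        params['accesskey'] = access_key
    resp = requests.get(S3_URL, params=params, timeout=12)
    resp.raise_for_status()
    data = resp.json()
    return data.get('max_item_size'), data.get('max_files_count')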

        print(f'warning: {args["<identifier>"]} is over limit, and not accepting requests. '
              'Expect 503 SlowDown errors.',
              file=sys.stderr)
        sys.exit(1)
    elif item.item_size >= MAX_ITEM_SIZE:
Contributor

Suggested change:
-    elif item.item_size >= MAX_ITEM_SIZE:
+    elif item.item_size > MAX_ITEM_SIZE:

JustAnotherArchivist (Contributor, Author)

This would require testing whether IA's servers still accept any upload (including an empty file) when the item is exactly 1 TiB. That might be tricky, though, since I think the metadata files, which get modified after every upload, also count towards the item size.
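For illustration, assuming MAX_ITEM_SIZE is defined as 1 TiB in bytes (the constant name comes from the diff; the exact value is an assumption), the two comparisons only disagree for an item of exactly that size:

MAX_ITEM_SIZE = 1024 ** 4  # assumed: 1 TiB in bytes

item_size = MAX_ITEM_SIZE  # an item that is exactly full

# PR as written: an exactly-full item already fails the status check.
print(item_size >= MAX_ITEM_SIZE)  # True

# Suggested change: an exactly-full item still passes, which only
# helps if IA actually accepts further uploads to such an item.
print(item_size > MAX_ITEM_SIZE)   # False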

@@ -160,19 +163,22 @@ def main(argv, session):
        sys.exit(1)

    # Status check.
    if args['<identifier>']:
        item = session.get_item(args['<identifier>'])
Contributor

Do we really want to get an item that could be 1TB or more before we do a status-check?

JustAnotherArchivist (Contributor, Author)

Well, we need the Item object for both the size status check and the actual upload. While this structure means we needlessly fetch the item metadata when S3 is overloaded, it avoids more complicated conditions (e.g. first run the S3 overload check if --status-check is present, then fetch the item metadata if the identifier is present, then check the item size if both are present, then exit successfully if --status-check is present), which in my opinion would lead to less readable code.

The alternatives are two separate get_item calls, which is just as ugly, or some sort of lazy evaluation, which is somewhat complicated to implement. So I found this to be the least awkward solution.
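A minimal sketch of the structure being defended here, condensed and paraphrased from the diff rather than quoted verbatim; MAX_ITEM_SIZE, the warning messages, and the s3_is_overloaded call are stand-ins assumed from the internetarchive session API:

import sys

MAX_ITEM_SIZE = 1024 ** 4  # stand-in: 1 TiB

def status_check_flow(session, args):
    item = None
    if args['<identifier>']:
        # One get_item call serves both the size check and the upload;
        # it fetches only the item's metadata, never its files.
        item = session.get_item(args['<identifier>'])

    if args['--status-check']:
        if session.s3_is_overloaded():
            print('warning: S3 is overloaded. Expect 503 SlowDown errors.',
                  file=sys.stderr)
            sys.exit(1)
        elif item is not None and item.item_size >= MAX_ITEM_SIZE:
            print(f'warning: {args["<identifier>"]} is over limit.',
                  file=sys.stderr)
            sys.exit(1)
        sys.exit(0)  # status check passed; exit successfully

    return item  # reused later for the actual upload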

Merging this pull request may close: upload command's status check flag doesn't check if item is full (#293)