feat: upload DB dump to AWS S3 #10863

Merged · 3 commits · Oct 7, 2024
23 changes: 23 additions & 0 deletions conf/nginx/sites-available/off
@@ -53,6 +53,29 @@ server {
gunzip on;
}

# Add an HTTP 302 redirect to AWS S3 bucket for specific dump files
location = /data/openfoodfacts_recent_changes.jsonl.gz {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/openfoodfacts_recent_changes.jsonl.gz;
}
location = /data/openfoodfacts-mongodbdump.gz {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/openfoodfacts-mongodbdump.gz;
}
location = /data/openfoodfacts-products.jsonl.gz {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/openfoodfacts-products.jsonl.gz;
}
location = /data/en.openfoodfacts.org.products.csv {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/en.openfoodfacts.org.products.csv;
}
location = /data/en.openfoodfacts.org.products.csv.gz {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/en.openfoodfacts.org.products.csv.gz;
}
location = /data/fr.openfoodfacts.org.products.csv {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/fr.openfoodfacts.org.products.csv;
}
location = /data/fr.openfoodfacts.org.products.csv.gz {
return 302 https://openfoodfacts-ds.s3.eu-west-3.amazonaws.com/fr.openfoodfacts.org.products.csv.gz;
}

if ($http_referer ~* (jobothoniel.com) ) { return 403; } # blocked since 2021-07-13

# the app requests /1.json to get the product count...
10 changes: 10 additions & 0 deletions scripts/gen_feeds_daily_off.sh
@@ -21,6 +21,16 @@ for export in en.openfoodfacts.org.products.csv fr.openfoodfacts.org.products.cs
mv -f new.$export.gz $export.gz
done

# Copy CSV and RDF files to AWS S3 using MinIO client
mc cp \
en.openfoodfacts.org.products.csv \
en.openfoodfacts.org.products.csv.gz \
en.openfoodfacts.org.products.rdf \
fr.openfoodfacts.org.products.csv \
fr.openfoodfacts.org.products.csv.gz \
fr.openfoodfacts.org.products.rdf \
s3/openfoodfacts-ds

Contributor:
Some of those files are quite big and could take a long time to upload. What happens in the meantime? Is there a temporary file on S3, with the existing file replaced only once the full file has been received, or could someone download a file that is only partially uploaded?

Member:
Also, why not do it in the background to let the script continue, by adding a & at the end of the command? Quoting the Bash manual: "If a command is terminated by the control operator &, the shell executes the command in the background in a subshell."

Member:
@CharlesNepote the problem is that if you use an '&' it's harder to know there has been an error. If we do this we have to join all children at the end and ensure they ended up correctly (or fail).

Contributor (author):
CSV+RDF files took 5 minutes to upload (Total: 30.18 GiB, Transferred: 30.18 GiB, Speed: 107.80 MiB/s); the other dumps took 3 minutes (Total: 16.99 GiB, Transferred: 16.99 GiB, Speed: 106.60 MiB/s).

From what I can see in the documentation (and what I've observed during upload), PUT operations on an existing key are atomic: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel

So either the file is fully uploaded and available, or the old version is served.
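The trade-off the reviewers raise (background upload vs. error visibility) can be reconciled: start the transfer in the background, then join it with `wait` and check its exit status. A minimal sketch, not part of the PR; `do_upload` is a hypothetical stand-in for the real `mc cp ... s3/openfoodfacts-ds` call so the sketch runs anywhere:

```shell
# do_upload stands in for the real long-running `mc cp ... s3/openfoodfacts-ds`.
do_upload() {
    sleep 1  # placeholder for the multi-gigabyte transfer
}

do_upload &
upload_pid=$!

# ... the rest of the feed generation would continue here ...

# Join the background child and propagate its exit status,
# so a failed upload still fails the whole script.
if wait "$upload_pid"; then
    echo "upload finished"
else
    echo "ERROR: background S3 upload failed" >&2
    exit 1
fi
```

With several parallel uploads, each PID would need to be recorded and waited on individually, since `wait` with no arguments discards the children's exit statuses.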

# Generate the MongoDB dumps and jsonl export
cd /srv/off/scripts

7 changes: 7 additions & 0 deletions scripts/mongodb_dump.sh
@@ -60,4 +60,11 @@ popd > /dev/null # data/delta
mongoexport --collection recent_changes --host $HOST --db $DB --fields=_id,comment,code,userid,rev,countries_tags,t,diffs | gzip -9 > "new.${PREFIX}_recent_changes.jsonl.gz" && \
mv new.${PREFIX}_recent_changes.jsonl.gz ${PREFIX}_recent_changes.jsonl.gz

# Copy files to AWS S3 using MinIO client
mc cp \
${PREFIX}-products.jsonl.gz \
${PREFIX}_recent_changes.jsonl.gz \
${PREFIX}-mongodbdump.gz \
s3/openfoodfacts-ds
Member:
Same comment: you can add & if you want to do it in the background.

popd > /dev/null # data
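 
Relatedly, as written the script exits 0 even if `mc cp` fails, leaving a stale object on S3 with no alert. A hedged hardening sketch, not part of the PR; the stub `mc` function is defined only so the sketch is runnable where the MinIO client is absent:

```shell
# Stub `mc` when the MinIO client is not installed, so the sketch runs anywhere;
# in production the real `mc` on PATH is used instead.
if ! command -v mc >/dev/null 2>&1; then
    mc() { echo "stub: mc $*"; }
fi

PREFIX=openfoodfacts  # the surrounding script already defines PREFIX

# Check the upload's exit status and fail the whole dump script loudly on error.
if mc cp \
    "${PREFIX}-products.jsonl.gz" \
    "${PREFIX}_recent_changes.jsonl.gz" \
    "${PREFIX}-mongodbdump.gz" \
    s3/openfoodfacts-ds
then
    echo "upload ok"
else
    echo "ERROR: S3 upload of MongoDB dumps failed" >&2
    exit 1
fi
```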