Etcd snapshots retention when node name changes #8099
Conversation
Signed-off-by: Vitor <[email protected]>
S3 E2E passes with this PR:
Ran 8 of 8 Specs in 115.931 seconds
SUCCESS! -- 8 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: Test_E2ES3 (115.93s)
PASS
ok command-line-arguments 115.934s
Codecov Report
Patch coverage:
Additional details and impacted files
@@ Coverage Diff @@
## master #8099 +/- ##
==========================================
+ Coverage 44.45% 51.51% +7.05%
==========================================
Files 140 143 +3
Lines 14508 14568 +60
==========================================
+ Hits 6449 7504 +1055
+ Misses 6963 5873 -1090
- Partials 1096 1191 +95
Fixed the etcd retention to delete orphaned snapshots
Signed-off-by: Vitor <[email protected]>
I think we should go ahead with this change as-is, and document and/or release-note that multiple clusters should not store snapshots in the same bucket+prefix. Doing so already creates issues, as snapshots cross-populate across clusters if they are stored in the same location. With this change, clusters will enforce retention limits on each other's files if they happen to be in the same place, and this is working as designed. In the future, as part of work on #8064, we will store additional metadata alongside the snapshot files in order to allow RKE2 to determine whether or not a snapshot file is owned by the cluster.
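To illustrate that recommendation, clusters that share a bucket can keep their snapshots separated by prefix so retention never touches another cluster's files. The flags below are existing k3s etcd S3 options, but the bucket and folder names are placeholders and the S3 endpoint/credential flags are omitted, so treat this as a minimal sketch rather than a full configuration:

# Cluster A writes its snapshots under its own prefix
k3s server --etcd-s3 --etcd-s3-bucket my-snapshots --etcd-s3-folder cluster-a

# Cluster B uses the same bucket but a different prefix, so neither cluster
# ever prunes the other's snapshot files
k3s server --etcd-s3 --etcd-s3-bucket my-snapshots --etcd-s3-folder cluster-b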
Proposed Changes
Types of Changes
Verification
Create a cluster with etcd snapshots enabled and S3 configured (an example set of server flags is sketched below).
You can then watch /var/lib/rancher/k3s/server/db/snapshots and confirm that the retention limit is enforced.
If you are running on AWS, reboot the machine so it comes back with a different node name and restart the cluster; snapshots taken under the old node name in /var/lib/rancher/k3s/server/db/snapshots are still deleted and the retention limit is maintained.
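A minimal sketch of that verification run on a single-node cluster; the flags and the etcd-snapshot subcommand exist in k3s, while the schedule, retention count, bucket, and folder values are placeholders chosen so that retention kicks in quickly (S3 endpoint and credential flags are omitted):

# Start a server that snapshots every 5 minutes and keeps only 3 snapshots
k3s server \
  --etcd-snapshot-schedule-cron "*/5 * * * *" \
  --etcd-snapshot-retention 3 \
  --etcd-s3 \
  --etcd-s3-bucket my-snapshots \
  --etcd-s3-folder test-cluster

# The local snapshot directory should never hold more than 3 snapshot files,
# even after the node reboots and comes back with a new name
ls -l /var/lib/rancher/k3s/server/db/snapshots

# List the snapshots that k3s is tracking
k3s etcd-snapshot ls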
Testing
Linked Issues
User-Facing Change
Further Comments