etcdctlv3: snapshot/recover design #4896
/cc @heyitsanthony
Right.
Probably we can add a --verify flag, so etcdctl can help verify the snapshot? Otherwise verification is delayed until server startup time. On the other hand, we do expect users to verify the snapshot after they store it somewhere, so a forced verification seems redundant for some use cases.
@xiang90 how much validation is done on the snapshot?
@colhom etcdctl backup is guaranteed to return a good snapshot when it exits with 0.
Ok then, imho we don't need a --verify flag.
@colhom I feel so. Thanks! |
I think auth and alarms would need special handling too. Maybe it'd make sense to have a special etcd KV space and use the KV machinery already in place (watchers, get) instead of managing custom buckets? |
@heyitsanthony In most cases (if not all), users just want to back up the static data, not dynamic state like leases, watchers, or alarms, I think.
@heyitsanthony Shall we close this one? |
@xiang90 yeah seems like it's all hammered out |
Snapshot
Snapshot command should get a point-in-time consistent snapshot of the v3 KV space.
Snapshot command should not snapshot any configuration-related data like clusterID or memberID.
TODO: handle lease?
TODO: support incremental snapshot?
Recover
Recover command should write the configuration data provided by the user (--initial-cluster, --initial-token, etc.) and create a correct member dir.
Recover command should take a snapshot and put it into the right location in the created member dir.