
KEP-3476: Add Volume Group Snapshot KEP #1551

Merged (19 commits) Feb 8, 2023

Conversation

@xing-yang (Contributor) commented Feb 13, 2020

This PR proposes a VolumeGroupSnapshot CRD.
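For orientation, a rough sketch of what such a CRD's top-level Go types might look like. All names here are illustrative assumptions, not the KEP's authoritative definitions, which live in the KEP text under review:

```go
// Hypothetical sketch of the proposed API shape, for orientation only.
package api

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// VolumeGroupSnapshot requests a point-in-time snapshot of a group of volumes.
type VolumeGroupSnapshot struct {
	metav1.TypeMeta
	metav1.ObjectMeta

	Spec   VolumeGroupSnapshotSpec
	Status *VolumeGroupSnapshotStatus
}

type VolumeGroupSnapshotSpec struct {
	// Selector picks the PVCs whose volumes should be snapshotted together.
	Selector *metav1.LabelSelector
	// VolumeGroupSnapshotClassName points at driver-specific parameters.
	// +optional
	VolumeGroupSnapshotClassName *string
}

type VolumeGroupSnapshotStatus struct {
	// ReadyToUse reports whether every member snapshot is ready.
	ReadyToUse *bool
}
```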

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 13, 2020
@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Feb 13, 2020
@k8s-ci-robot k8s-ci-robot added sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels May 4, 2020
@saad-ali (Member) left a comment

For creating a VolumeGroup I see four paths:

  1. Create an empty VolumeGroup and request new un-provisioned volume(s) to be provisioned and added one by one.
  2. Create an empty VolumeGroup and request existing volume(s) to be added one by one.
  3. Create a non-empty VolumeGroup with a fixed set of new un-provisioned volume(s) to be provisioned.
  4. Create a non-empty VolumeGroup with a fixed set of existing volume(s).

Different storage vendors may support one or more of these. We need to identify what is out there to figure out which of these cases to support.

To add to the complexity, this may vary by VolumeGroup feature: snapshot vs. replication vs. placement vs. crash-consistent writes.
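As one way to visualize the option space, here is a sketch of a single spec shape that could express the four creation paths above. This is not from the KEP; every field name is a hypothetical assumption:

```go
// A sketch of one spec shape covering all four creation paths: an empty
// mutable group (paths 1 and 2), a fixed set of newly provisioned volumes
// (path 3), or a fixed set of existing volumes (path 4).
package api

type VolumeGroupSpec struct {
	// Mutable allows volumes to be added to the group one by one after
	// creation, whether newly provisioned (path 1) or existing (path 2).
	Mutable bool

	// ExistingVolumeNames fixes the group to these pre-existing PVCs at
	// creation time (path 4).
	// +optional
	ExistingVolumeNames []string

	// ProvisionCount asks for this many new volumes to be provisioned into
	// the group at creation time (path 3).
	// +optional
	ProvisionCount *int32
}
```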

Similarly, for snapshot restore there may be multiple possibilities:

  1. The group snapshot returns only a GroupSnapshotID, not individual snapshot IDs; restore must therefore go through the GroupSnapshotID.
  2. The group snapshot returns only individual VolumeSnapshotIDs; restore must therefore happen on a per-snapshot basis.

I added an item to the SIG Storage Meeting Agenda tomorrow to ask for storage vendor input.
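For the restore question, the two cases above boil down to which identifiers the driver hands back. A minimal illustrative sketch, with assumed type and field names:

```go
// Case 1 hands back only a group-level ID, so restore must reference the
// group; case 2 hands back one ID per member volume, so each snapshot can
// be restored individually.
package api

type GroupSnapshotResult struct {
	// GroupSnapshotID is the only identifier in case 1.
	GroupSnapshotID string

	// SnapshotIDs holds one VolumeSnapshotID per member volume in case 2;
	// empty when the backend reports only the group-level ID.
	SnapshotIDs []string
}
```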


@rbo54 left a comment

I have left comments. Please feel free to contact me at [email protected] if you wish.

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 13, 2020
@xing-yang xing-yang force-pushed the volumegroup branch 4 times, most recently from 7c7d0fd to f94abd3 on June 17, 2020
@vishnuitta

Consider a StatefulSet that has 3 replicas and 2 claims in volumeClaimTemplates; in total, that becomes 6 PVCs.
With this KEP, would it be possible to tell the storage controller that the 2 claims of each StatefulSet replica form one group, and that there are 3 such groups, so that the controller can take care of the placement of volumes?

This is a common use case where an application provides high availability by storing data in such a way that the application keeps running even if one pod goes down. But if the underlying storage of those 6 PVCs comes from a single failure domain, the application cannot run.

@xing-yang xing-yang force-pushed the volumegroup branch 2 times, most recently from bb52015 to 7b1cca4 on June 25, 2020
@xing-yang (Contributor, Author) commented Jun 26, 2020

> Consider a StatefulSet that has 3 replicas and 2 claims in volumeClaimTemplates; in total, that becomes 6 PVCs.
> With this KEP, would it be possible to tell the storage controller that the 2 claims of each StatefulSet replica form one group, and that there are 3 such groups, so that the controller can take care of the placement of volumes?
>
> This is a common use case where an application provides high availability by storing data in such a way that the application keeps running even if one pod goes down. But if the underlying storage of those 6 PVCs comes from a single failure domain, the application cannot run.

I think that should be possible. For the 2 PVCs on the 1st replica we can add the label replica-1; for the 2 PVCs on the 2nd replica, the label replica-2; and so on. We can then create 3 VolumeGroups, one for the PVCs on each replica. This is an immutable VolumeGroup with existing PVCs.

Supporting a mutable VolumeGroup with a StatefulSet, where the PVCs of each replica are placed in a different group, would require enhancing the StatefulSet controller. If we always create a separate group for the PVCs of each replica, we can modify the StatefulSet controller to add the pod name as a suffix to the group name. But if we sometimes need a separate group per replica, sometimes a single group for all PVCs across all replicas, and sometimes a group for only 2 of the 3 replicas, it gets tricky.
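A minimal runnable sketch of the label-per-replica grouping described above, using a hypothetical VolumeGroup type (the real API shape was still under discussion at this point):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// VolumeGroup is a stand-in for the group type under discussion; only the
// fields needed for the example are shown.
type VolumeGroup struct {
	Name     string
	Selector *metav1.LabelSelector
}

func main() {
	// One immutable group per StatefulSet replica: the 2 PVCs of replica i
	// carry the label replica=replica-i and are selected together.
	for i := 1; i <= 3; i++ {
		group := VolumeGroup{
			Name: fmt.Sprintf("volume-group-replica-%d", i),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					"replica": fmt.Sprintf("replica-%d", i),
				},
			},
		}
		fmt.Printf("%s selects PVCs with labels %v\n", group.Name, group.Selector.MatchLabels)
	}
}
```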

@xing-yang xing-yang force-pushed the volumegroup branch 2 times, most recently from cbd977f to 9a50a09 on June 26, 2020
@xing-yang xing-yang force-pushed the volumegroup branch 3 times, most recently from 2a26311 to 03c1e17 on January 25, 2023
@xing-yang xing-yang changed the title KEP-3476: Add Volume Group KEP KEP-3476: Add Volume Group Snapshot KEP Jan 26, 2023
@xing-yang (Contributor, Author)

@msau42 I addressed your comments. PTAL. Thanks.

The review thread below concerns this snippet from the KEP's proposed API:

```go
// +optional
Message *string
VolumeSnapshotRefList []core_v1.ObjectReference
```
Member:

Since these can only be snapshot kinds, we could make our own SnapshotReference type that only includes name.

Contributor Author:

If we only need snapshot names, can it just be a list of strings?

Member:

We can follow up on this during a more detailed API review, but I believe the guidance is to define a type scoped to your supported sources.

Contributor Author:

Sure
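For reference, a sketch of what the narrower type suggested above might look like. Names are illustrative assumptions, not the final API:

```go
// Instead of the general-purpose core_v1.ObjectReference, a type scoped to
// the one supported kind, carrying only the name (the referenced
// VolumeSnapshot would live in the same namespace as the group snapshot).
package api

type SnapshotReference struct {
	// Name of the referenced VolumeSnapshot.
	Name string
}

// The status field under discussion would then become:
//
//	VolumeSnapshotRefList []SnapshotReference
```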

@jsafrane (Member) commented Feb 8, 2023

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 8, 2023
@k8s-ci-robot k8s-ci-robot merged commit 1ca09eb into kubernetes:master Feb 8, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.27 milestone Feb 8, 2023
RomanBednar pushed a commit to RomanBednar/enhancements that referenced this pull request on Mar 1, 2024:

USHIFT-2196: User-facing audit-log configuration
Labels: approved · cncf-cla: yes · kind/kep · lgtm · sig/architecture · sig/storage · size/XXL

Successfully merging this pull request may close these issues: VolumeGroupSnapshot