Restore failed from VolumeSnapshot when G unit is specified #550

Closed
ysakashita opened this issue Mar 18, 2021 · 2 comments

@ysakashita

Describe the bug

If I specify the PVC size with the G unit, the restore from the snapshot fails.
The reason is that the snapshot requires a larger capacity than the requested size, even though I requested the same size as the source PV.

$ kubectl describe pvc restore-block 
...
Events:
  Type     Reason                Age                  From                                                                                     Message
  ----     ------                ----                 ----                                                                                     -------
  Normal   Provisioning          9s (x8 over 2m14s)   csi.trident.netapp.io_trident-csi-58f9ffd585-sfmqv_479e54a9-5597-45e0-81ac-23367f8b3edd  External provisioner is provisioning volume for claim "default/restore-block"
  Warning  ProvisioningFailed    9s (x8 over 2m14s)   csi.trident.netapp.io_trident-csi-58f9ffd585-sfmqv_479e54a9-5597-45e0-81ac-23367f8b3edd  failed to provision volume with StorageClass "ontap-block": error getting handle for DataSource Type VolumeSnapshot by Name snap-block-0: requested volume size 10000000000 is less than the size 10001317888 for the source snapshot snap-block-0
  Normal   ExternalProvisioning  5s (x10 over 2m14s)  persistentvolume-controller                                                              waiting for a volume to be created, either by external provisioner "csi.trident.netapp.io" or manually created by system administrator
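
The numbers in the error message line up with the unit conversion: the PVC requests 10G, a decimal quantity of 10 × 10^9 = 10,000,000,000 bytes, while the snapshot reports a restore size of 9538Mi = 9538 × 2^20 = 10,001,317,888 bytes. The binary form 10Gi (10 × 2^30 = 10,737,418,240 bytes) would be large enough, which is presumably why the Gi spelling works (see Additional context below).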

Environment

  • Trident version: v20.10.1
  • Trident installation flags used: -n trident
  • Container runtime: Docker 20.10.2
  • Kubernetes version: 1.18.15
  • Kubernetes orchestrator: none
  • Kubernetes enabled feature gates: none
  • OS: Ubuntu 20.04.1
  • NetApp backend types: ONTAP AFF 9.5

To Reproduce

  1. Create the source volume (size 10G)
  • mc-test-rw-block.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mc-test-block
spec:
  serviceName: mc-test-block
  replicas: 1
  selector:
    matchLabels:
      app: mc-test-block
  template:
    metadata:
      labels:
        app: mc-test-block
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mc-test-block
            topologyKey: "topology.kubernetes.io/zone"
      containers:
      - name: mc-test-block
        image: ubuntu:18.04
        volumeMounts:
        - name: block
          mountPath: /mnt/block
        command: ["/bin/sh"]
        args: ["-c", "while true; do /bin/date >>/mnt/block/time.log; sleep 1; done"]
  volumeClaimTemplates:
  - metadata:
      name: block
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ontap-block
      resources:
        requests:
          storage: 10G
$ kubectl apply -f mc-test-rw-block.yaml 
statefulset.apps/mc-test-block created

$ kubectl get pod,pvc
NAME                  READY   STATUS    RESTARTS   AGE
pod/mc-test-block-0   1/1     Running   0          30s

NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/block-mc-test-block-0   Bound    pvc-b5512c5d-e180-4ce4-9ab1-e83c8bfc70ea   10Gi       RWO            ontap-block    30s
  2. Create the VolumeSnapshot
  • snapshot-block.yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: snap-block-0
spec:
  volumeSnapshotClassName: csi-trident-netapp-io
  source:
    persistentVolumeClaimName: block-mc-test-block-0
$ kubectl apply -f snapshot-block.yaml 
volumesnapshot.snapshot.storage.k8s.io/snap-block-0 created

$ kubectl get volumesnapshots.snapshot.storage.k8s.io 
NAME           READYTOUSE   SOURCEPVC               SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS           SNAPSHOTCONTENT                                    CREATIONTIME   AGE
snap-block-0   true         block-mc-test-block-0                           9538Mi        csi-trident-netapp-io   snapcontent-c339fe9f-825f-4f4c-972e-157c42e3ec0b   29s            29s
  3. Restore from the snapshot
  • restore-block.yaml (size 10G, the same as the source PV)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-block
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ontap-block
  resources:
    requests:
      storage: 10G
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snap-block-0
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: restore-check-block
  name: restore-check-block
spec:
  containers:
  - image: ubuntu:18.04
    name: restore-check-block
    command:
    - sleep
    - infinity
    volumeMounts:
    - mountPath: /mnt/data
      name: myvolume
  volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: restore-block
$ kubectl apply -f restore-block.yaml 
persistentvolumeclaim/restore-block created
pod/restore-check-block created

$ kubectl get pod,pvc
NAME                      READY   STATUS    RESTARTS   AGE
pod/mc-test-block-0       1/1     Running   0          9m31s
pod/restore-check-block   0/1     Pending   0          68s

NAME                                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/block-mc-test-block-0   Bound     pvc-b5512c5d-e180-4ce4-9ab1-e83c8bfc70ea   10Gi       RWO            ontap-block    9m31s
persistentvolumeclaim/restore-block           Pending                                                                        ontap-block    68s

$ kubectl describe pvc restore-block 
...
Events:
  Type     Reason                Age                  From                                                                                     Message
  ----     ------                ----                 ----                                                                                     -------
  Normal   Provisioning          9s (x8 over 2m14s)   csi.trident.netapp.io_trident-csi-58f9ffd585-sfmqv_479e54a9-5597-45e0-81ac-23367f8b3edd  External provisioner is provisioning volume for claim "default/restore-block"
  Warning  ProvisioningFailed    9s (x8 over 2m14s)   csi.trident.netapp.io_trident-csi-58f9ffd585-sfmqv_479e54a9-5597-45e0-81ac-23367f8b3edd  failed to provision volume with StorageClass "ontap-block": error getting handle for DataSource Type VolumeSnapshot by Name snap-block-0: requested volume size 10000000000 is less than the size 10001317888 for the source snapshot snap-block-0
  Normal   ExternalProvisioning  5s (x10 over 2m14s)  persistentvolume-controller                                                              waiting for a volume to be created, either by external provisioner "csi.trident.netapp.io" or manually created by system administrator

Expected behavior
I want the restore to succeed when I specify the same size as the source PV.

Additional context

As far as I have checked, this problem does not occur when the size is specified in Gi units.
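
For illustration, here is a minimal sketch (not Trident's code) that parses the two suffixes with the Kubernetes quantity parser from k8s.io/apimachinery; the resulting byte counts match the ones in the provisioning error above.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Decimal suffix: 10G = 10 * 10^9 bytes.
	g := resource.MustParse("10G")
	// Binary suffix: 10Gi = 10 * 2^30 bytes.
	gi := resource.MustParse("10Gi")
	// Restore size reported for the snapshot: 9538Mi = 9538 * 2^20 bytes.
	restoreSize := resource.MustParse("9538Mi")

	fmt.Println("10G    =", g.Value())           // 10000000000
	fmt.Println("10Gi   =", gi.Value())          // 10737418240
	fmt.Println("9538Mi =", restoreSize.Value()) // 10001317888
}

Requesting 10Gi, or any size at least as large as the reported restoreSize (9538Mi), should avoid the mismatch.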

@gnarl
Contributor

gnarl commented Aug 16, 2021

We are currently investigating how best to fix this issue. As mentioned in #617, a user can also experience this error when using G instead of Gi to request a PVC clone.

@gnarl
Contributor

gnarl commented Aug 31, 2021

This issue was fixed with commit 40b3d1b and is included in the Trident v21.07.1 release.

gnarl closed this as completed on Aug 31, 2021