vSphere CSI Driver - v2.2.0 release

New Feature

Notable Changes

  • Added support for running the vSphere CSI driver on VMware Cloud™ on AWS (VMC). VMC currently supports block volumes only. The minimum SDDC version required to run the vSphere CSI driver is 1.12. Refer to the VMC release notes for more details.
  • Added a datastore privilege check that filters out shared datastores which do not have the Datastore.FileManagement privilege. Refer to the notes for the CNS-DATASTORE role at vSphere Roles and Privileges.
  • Added telemetry collection support using the cluster-distribution field in the vSphere config secret. Refer to the notes for cluster-distribution in the vSphere configuration secret.
  • Bug fixes (issues #1, #2, #6, #7, and #8 listed in the known issues section of v2.1.1 are fixed in the v2.2.0 release):
    • Fixed recreating a static file share volume with the same PV name.
    • Fixed the metadata syncer to disallow physically deleting the volume from the datastore when the PV is deleted by the user.
    • Fixed VCP-to-CSI migration to support the datastore path in the volume handle. Refer to issue-628 for more details.
    • Fixed CSI full sync failing to register volumes with the error "Duplicated entity for each entity type in one cluster".
    • Fixed volume provisioning when an empty datacenter is present in the vCenter inventory.

Deployment files

Kubernetes Release

  • Minimum: 1.18
  • Maximum: 1.20

Supported sidecar container versions

  • csi-provisioner - v2.1.0
  • csi-attacher - v3.1.0
  • csi-resizer - v1.1.0
  • livenessprobe - v2.2.0
  • csi-node-driver-registrar - v2.1.0

Known Issues

vSphere CSI Driver issues

  1. Migrated in-tree vSphere volumes deleted by the in-tree vSphere plugin remain on the CNS UI.
    • Impact: Migrated in-tree vSphere volumes deleted by the in-tree vSphere plugin remain visible on the CNS UI.
    • Workaround: The admin needs to manually reconcile discrepancies in the Managed Virtual Disk Catalog by following this KB article - https://kb.vmware.com/s/article/2147750
  2. Volume expansion might fail when it is invoked at the same time as pod creation. This issue is fixed in vSphere 7.0u2 but is present in vSphere 7.0u1 and 7.0.
    • Impact: If a user resizes a PVC and creates a pod using that PVC at the same time, pod creation might complete first using the PVC at its original size. Volume expansion then fails because online resize is not supported in vSphere 7.0 Update 1.
    • Workaround: Wait for the PVC to reach the FilesystemResizePending condition before attaching a pod to it (a scripted sketch of this wait is shown after this list).
  3. Deleting a PV before deleting its PVC leaves an orphan volume on the datastore.
    • Impact: Orphan volumes remain on the datastore, and the admin needs to delete them manually using the govc command.
    • Upstream issue is tracked at: kubernetes-csi/external-provisioner#546
    • Workaround:
      • No workaround. Users should not attempt to delete a PV that is bound to a PVC, and should only delete a PV if they know that the underlying volume in the storage system is gone.
      • If orphan volumes were accidentally left on the datastore because this guideline was not followed, and the volume handles or First Class Disk (FCD) IDs of the deleted PVs were captured, the storage admin can delete those volumes using the govc disk.rm <volume-handle/FCD ID> command. A sketch for recording volume handles ahead of time is shown after this list.
  4. vSAN file share volumes (ReadWriteMany/ReadOnlyMany vSphere CSI volumes) become inaccessible after the vSphere CSI driver node DaemonSet pod restarts.
    • Impact: Pods cannot read or write data on vSAN file share volumes.
    • Workaround: Remount the vSAN file service volumes used by the application pods at the same location, directly from the node VM guest OS. Detailed steps are available here.
  5. Changes to user and ca-file in the vsphere-config-secret are not honored until the vSphere CSI controller pod restarts.
    • Impact: Volume lifecycle operations fail until the controller pod restarts.
    • Workaround: Restart the vsphere-csi-controller deployment pod.
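
The wait described in the workaround for issue 2 above can be scripted. Below is a minimal sketch using the Kubernetes Python client; the PVC name, namespace, and timeout values are illustrative assumptions rather than values from this release, and FileSystemResizePending is the upstream Kubernetes condition type.

```python
# Minimal sketch: poll a PVC until Kubernetes reports FileSystemResizePending,
# then create the pod that uses it. Names and timeouts below are examples only.
import time

from kubernetes import client, config

PVC_NAME = "example-pvc"   # assumption: replace with your PVC name
NAMESPACE = "default"      # assumption: replace with your namespace


def wait_for_fs_resize_pending(timeout_seconds=300, poll_seconds=5):
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    core_v1 = client.CoreV1Api()
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        pvc = core_v1.read_namespaced_persistent_volume_claim(PVC_NAME, NAMESPACE)
        conditions = (pvc.status and pvc.status.conditions) or []
        if any(c.type == "FileSystemResizePending" and c.status == "True"
               for c in conditions):
            return True
        time.sleep(poll_seconds)
    return False


if __name__ == "__main__":
    if wait_for_fs_resize_pending():
        print("PVC is ready for pod creation.")
    else:
        print("Timed out waiting for FileSystemResizePending.")
```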
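
For issue 3 above, the volume handles (FCD IDs) of CSI-provisioned PVs can be recorded before any PV is deleted, so that govc disk.rm has something to act on if an orphan volume is ever left behind. The following is a minimal sketch using the Kubernetes Python client; it assumes the standard vSphere CSI driver name csi.vsphere.vmware.com and simply prints a PV-to-volume-handle mapping.

```python
# Minimal sketch: print the CSI volume handle (FCD ID) of every PV provisioned
# by the vSphere CSI driver so the IDs are on record before any PV is deleted.
from kubernetes import client, config

VSPHERE_CSI_DRIVER = "csi.vsphere.vmware.com"  # standard vSphere CSI driver name


def print_vsphere_volume_handles():
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    core_v1 = client.CoreV1Api()
    for pv in core_v1.list_persistent_volume().items:
        csi = pv.spec.csi
        if csi and csi.driver == VSPHERE_CSI_DRIVER:
            # csi.volume_handle is the value that `govc disk.rm <volume-handle>`
            # expects for CNS block volumes.
            print(f"{pv.metadata.name}\t{csi.volume_handle}")


if __name__ == "__main__":
    print_vsphere_volume_handles()
```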

Kubernetes issues

  1. Filesystem resize is skipped if the original PVC is deleted while the FilesystemResizePending condition is still on the PVC, but the PV and its associated volume on the storage system are not deleted due to the Retain policy.
    • Issue: kubernetes/kubernetes#88683
    • Impact: A user may create a new PVC to statically bind to the undeleted PV. In this case, the volume on the storage system is resized but the filesystem is not resized accordingly, so writes may fail once the filesystem runs out of capacity.
    • Workaround: The user can log into the container and manually resize the filesystem.
  2. A volume associated with a StatefulSet cannot be resized.
  3. There is no way to recover from a volume expansion failure.
    • Impact: If a user tries to expand a PVC to a size that the underlying storage system cannot support, volume expansion keeps failing and there is no way to recover.
    • Issue: kubernetes/enhancements#1516
    • Workaround: None

vSphere issues

  1. CNS file volumes have an 8K limit for metadata.
    • Impact: It may not be possible to push all metadata to the CNS file share, since a maximum of 64 clients per file volume must be supported.
    • Workaround: None. This is a vSphere limitation.