Alpha->beta migration hints
mattcary committed May 18, 2020
1 parent 4d569a6 commit 746d42b
Showing 1 changed file with 142 additions and 0 deletions: docs/kubernetes/user-guides/snapshots.md
@@ -122,3 +122,145 @@
lost+found
sample-file.txt
```

### Tips on Migrating from Alpha Snapshots

The API version changed between the alpha and beta releases of the CSI
snapshotter. In many cases, snapshots created with the alpha driver cannot be
used with a beta driver; in some cases, they can be migrated.

*You may lose all data in your snapshot!*

*These instructions may not work in your case and could delete all data in your
snapshot!*

These instructions happened to work with a particular GKE configuration. Because
of the variety of alpha deployments, it is not possible to verify that these
steps will work in all cases. The following has been tested with a GKE cluster
of master version 1.17.5-gke.0, using the following image versions for the
snapshot-controller and PD CSI driver:

* snapshot-controller:v2.0.1
* csi-provisioner:v1.5.0-gke.0
* gke.gcr.io/csi-attacher:v2.1.1-gke.0
* gke.gcr.io/csi-resizer:v0.4.0-gke.0
* gke.gcr.io/csi-snapshotter:v2.1.1-gke.0
* gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0
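
To check which image versions your own deployment is running before attempting
a migration, something like the following should work (assuming the
`snapshot-controller-0` and `csi-gce-pd-controller-0` pod names and the
`gce-pd-csi-driver` namespace used elsewhere in this guide; adjust to your
deployment):

```console
# Print the images used by the snapshot-controller and PD CSI controller pods.
kubectl get pod snapshot-controller-0 \
  -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}'
kubectl get pod -n gce-pd-csi-driver csi-gce-pd-controller-0 \
  -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}'
```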

#### Overview

The idea is to provision a `VolumeSnapshotContents` in your new or upgraded beta
PD CSI driver cluster with the PD snapshot handle from the alpha snapshot, then
bind that to a new beta `VolumeSnapshot`.

If you are able to set (or have already set) the `deletionPolicy` of any
existing alpha `VolumeSnapshotContents` and alpha `VolumeSnapshotClass` to
`Retain`, then deleting the snapshot CRs will not delete the underlying PD
snapshot. The surviving PD snapshot can then be used to provision a beta
`VolumeSnapshotContents` as described below.
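
A patch along these lines can flip the policy on an existing alpha
`VolumeSnapshotContents` (a sketch only: the object name is a placeholder, and
the exact field layout of the alpha CRD may differ in your deployment, so check
with `kubectl get volumesnapshotcontent -o yaml` first):

```console
# Hypothetical example: keep the underlying PD snapshot when the alpha CRs are
# deleted. Replace the name with your own alpha VolumeSnapshotContents.
kubectl patch volumesnapshotcontent <your-alpha-snapshot-content> \
  --type merge -p '{"spec":{"deletionPolicy":"Retain"}}'
```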

If you are creating a new cluster rather than upgrading an existing one, the
method below could also be used before deleting the alpha cluster and/or alpha
PD CSI driver deployment. This may be a safer option if you are not sure about
the deletion policy of your alpha snapshot.

#### Steps to restore from an existing snapshot

These steps take place in a cluster with the beta PD CSI driver installed. At no
point is the alpha cluster or driver referenced. After this is done, using the
alpha snapshot may conflict with the resources created below in the beta
cluster.

1. Confirm in
[console.cloud.google.com/compute/snapshots](https://console.cloud.google.com/compute/snapshots)
that your desired snapshots exist. Note the snapshot name, which looks like
`snapshot-XXXXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXXXXX`, and set an env variable:
`export SNAPSHOT_NAME=snapshot-XXXXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXXXXX` (copy
in your exact name).
1. Export your project id: `export PROJECT_ID=<your project id>`.
1. Create a `VolumeSnapshot` resource which will be bound to a pre-provisioned
`VolumeSnapshotContent`. Note this is called `restored-snapshot`; the name can
be changed, but change it consistently across the other resources.
```console
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: restored-snapshot
spec:
  volumeSnapshotClassName: csi-gce-pd-snapshot-class
  source:
    volumeSnapshotContentName: restored-snapshot-content
EOF
```
1. Create a `VolumeSnapshotContent` pointing to your existing PD snapshot.
```console
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: restored-snapshot-content
spec:
  deletionPolicy: Retain
  driver: pd.csi.storage.gke.io
  source:
    snapshotHandle: projects/$PROJECT_ID/global/snapshots/$SNAPSHOT_NAME
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: restored-snapshot
    namespace: default
EOF
```
1. Create a `PersistentVolumeClaim` which will pull from the
`VolumeSnapshot`. The `StorageClass` must match the one you use to provision
PDs; the one below matches the example used above.
```console
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restored-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 6Gi
  dataSource:
    kind: VolumeSnapshot
    name: restored-snapshot
    apiGroup: snapshot.storage.k8s.io
EOF
```
1. Finally, create a pod referring to the `PersistentVolumeClaim`. The PD CSI
driver will provision a `PersistentVolume` and populate it from the
snapshot. The pod from `examples/kubernetes/snapshot/restored-pod.yaml` is
set to use the PVC created above. After `kubectl apply`'ing the pod, run
`kubectl exec restored-pod -- ls -l /demo/data/` to confirm that the
snapshot has been restored correctly.
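
For reference, a pod along the lines of `restored-pod.yaml` looks roughly like
the following (a sketch only; the image, volume name, and sleep command are
placeholders, so consult the file under `examples/kubernetes/snapshot/` for the
authoritative version):

```console
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restored-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: restored-volume
          mountPath: /demo/data
  volumes:
    - name: restored-volume
      persistentVolumeClaim:
        claimName: restored-pvc
EOF
```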

#### Troubleshooting

If the `VolumeSnapshot`, `VolumeSnapshotContents` and `PersistentVolumeClaim`
are not all mutually syncrhonized, the pod will not start. Try `kubectl describe
volumesnapshotcontent restored-snapshot-content` to see any error messages
relating to the binding together of the `VolumeSnapshot` and the
`VolumeSnapshotContents`. Further errors may be found in the snapshot controller
logs:

```console
kubectl logs snapshot-controller-0 | tail
```

Any errors in `kubectl describe pvc restored-pvc` may also shed light on the
problem. Sometimes it takes a while to dynamically provision the
`PersistentVolume` for the PVC; the following command will give more
information.

```console
kubectl logs -n gce-pd-csi-driver csi-gce-pd-controller-0 -c csi-provisioner | tail
```

The maintainers will welcome any additional information and will be happy to
review PRs clarifying or extending these tips.
