I have the following situation: a standard K8S cluster with Trident 18.01.00 (client and server).
I created one backend with ONTAP 9.3 - svm1, which has access to three aggregates (ssd, hybrid, and hdd).
Here is the backend definition:
{ "version": 1, "storageDriverName": "ontap-nas", "managementLIF": "172.20.xx", "dataLIF": "172.20.xx", "svm": "svm1", "username": "vsadmin", "password": "xxx", "storagePrefix": "k8s", "defaults": { "snapshotPolicy": "k8s-snap-silver-policy" } }
Then I created two storage classes, silver and gold, for the hdd and ssd aggregates (`media: "hdd"` and `media: "ssd"`). When checking the details in Trident with `tridentctl -n trident get backend -o json`, I can see that the silver storage class is bound to the hdd aggregate and the gold storage class to the ssd aggregate. Afterwards I created two PVCs in K8S, one with the silver class and one with the gold class, and they were created accordingly.
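For reference, here is a minimal sketch of what the silver storage class and its PVC could have looked like with Trident 18.01 (pre-CSI); the provisioner string, access mode, and object names are assumptions inferred from the volume names in the output further down, not the actual manifests:

```yaml
# Sketch only: names, provisioner, and access mode are assumed, not taken from the real setup.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
provisioner: netapp.io/trident       # pre-CSI Trident provisioner
parameters:
  media: "hdd"                       # silver maps to the hdd aggregate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-volume-silver
  namespace: default
spec:
  accessModes:
    - ReadWriteMany                  # assumed; any NFS-compatible access mode would do
  resources:
    requests:
      storage: 1Gi
  storageClassName: silver
```

The gold class would be identical except for `name: gold` and `media: "ssd"`.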
So far so good. Now I create svm2 on the same ONTAP cluster, with access to the same three aggregates, and in Trident I create a second backend that points to this SVM. Here is the config:
{ "version": 1, "storageDriverName": "ontap-nas", "managementLIF": "172.20.zz", "dataLIF": "172.20.zz", "svm": "svm2", "username": "vsadmin", "password": "xx", "storagePrefix": "sec" }
Now I just look at the result:
```
[root@sheger-k8s-node4 setup]# tridentctl -n trident get volume
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| NAME                               | SIZE    | STORAGE CLASS | PROTOCOL | BACKEND            | POOL           |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| default-sample-volume-silver-e2199 | 1.0 GiB | silver        | file     | ontapnas_172.20.xx | node2_data_sas |
| default-sample-volume-gold-7ccc5   | 1.0 GiB | gold          | file     | ontapnas_172.20.xx | node1_data_ssd |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
```
Still correct. But when I check the backends, they show 4 volumes in total:
```
+--------------------+----------------+--------+---------+
| NAME               | STORAGE DRIVER | ONLINE | VOLUMES |
+--------------------+----------------+--------+---------+
| ontapnas_172.20.xx | ontap-nas      | true   | 2       |
| ontapnas_172.20.zz | ontap-nas      | true   | 2       |
+--------------------+----------------+--------+---------+
```
Why is the second backend showing 2 volumes? When I check the details, it lists the same volumes, even though they live on a different backend, belong to a different SVM, and use a different prefix.
When I delete one of the volumes it gets worse: it is correctly deleted from the first backend (svm1), but it remains visible on the second backend. So it stays there forever, and it is not possible to clean it up.
```
[root@sheger-k8s-node4 setup]# tridentctl -n trident get backend
+--------------------+----------------+--------+---------+
| NAME               | STORAGE DRIVER | ONLINE | VOLUMES |
+--------------------+----------------+--------+---------+
| ontapnas_172.20.zz | ontap-nas      | true   | 2       |
| ontapnas_172.20.xx | ontap-nas      | true   | 1       |
+--------------------+----------------+--------+---------+
[root@sheger-k8s-node4 setup]# tridentctl -n trident get volume
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| NAME                               | SIZE    | STORAGE CLASS | PROTOCOL | BACKEND            | POOL           |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| default-sample-volume-silver-e2199 | 1.0 GiB | silver        | file     | ontapnas_172.20.xx | node2_data_sas |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
```