Using volumeClaimTemplates results in PVCs being deleted before pods come up #186
Comments
@ryanmorris708 Can you confirm which druid-operator version you are using?
Here is my chart.yaml for the Operator:
@ryanmorris708 Can you send me your StorageClass YAML, please?
Can you try adding this parameter to your StorageClass?
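A minimal sketch of a StorageClass carrying the field under discussion, assuming the suggested parameter was volumeBindingMode: WaitForFirstConsumer (the reply below confirms only that volumeBindingMode is the field in question; the value and the provisioner shown here are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-storageclass-2      # new class, since volumeBindingMode is immutable
provisioner: csi.example.com    # placeholder: the cluster's actual CSI driver
volumeBindingMode: WaitForFirstConsumer  # assumed suggested value; the default is Immediate
reclaimPolicy: Delete
```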
The volumeBindingMode is immutable, and this StorageClass is in use by other deployments, so I created an identical StorageClass called csi-storageclass-2 with only the volumeBindingMode differing. I observed the same behavior: the PVCs were deleted immediately.
Output of "kubectl describe storageclass csi-storageclass" for comparison (the StorageClass that I used for my initial post):
@ryanmorris708 The operator tries to remove unused resources, and somehow the PVCs are getting caught in this. I have sent a fix here: https://github.com/druid-io/druid-operator/pull/187/files cc @himanshug
I have attempted to deploy a namespace-scoped Druid Operator and a Druid CR cluster using volumeClaimTemplates for the MiddleManagers and Historicals. However, the PVCs are deleted immediately after creation (the deletions are logged by the Operator), and the MiddleManagers and Historicals remain in the Pending state because their PVCs are missing.
I have tried setting deleteOrphanPVC: false and disablePVCDeletionFinalizer: true, both separately and together, but neither has any effect.
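For reference, a rough sketch of where these flags would sit in the Druid CR, assuming top-level spec placement; the field names are copied as written above and the metadata values are illustrative:

```yaml
apiVersion: druid.apache.org/v1alpha1
kind: Druid
metadata:
  name: tiny-cluster
  namespace: druid
spec:
  deleteOrphanPVC: false             # tried: should keep orphaned PVCs around
  disablePVCDeletionFinalizer: true  # tried: should skip the PVC-deletion finalizer
  # ... rest of the cluster spec ...
```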
Manually provisioning the PVCs before deploying the cluster is not a viable workaround: the StatefulSet will not bind each Pod to its own PVC unless the claims are provisioned dynamically from the volumeClaimTemplates. Running multiple Historicals or MiddleManagers would therefore be impossible, since they would all be forced to share the same segment cache, log files, tmp directory, and so on.
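For context, a minimal sketch of the kind of volumeClaimTemplates block at issue, assuming a MiddleManager node and the csi-storageclass from this thread; the node key, replica count, size, and mount path are illustrative, and the fragment omits the rest of the node spec:

```yaml
nodes:
  middlemanagers:
    nodeType: middleManager
    druid.port: 8091
    replicas: 2
    volumeClaimTemplates:
      - metadata:
          name: data-volume
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: csi-storageclass
          resources:
            requests:
              storage: 10Gi
    volumeMounts:
      - name: data-volume
        mountPath: /druid/data   # each Pod gets its own dynamically provisioned PVC here
```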
Please see my definitions and debug output below. I have omitted the node definitions and logs except for the MiddleManager to shorten this post.
Druid Operator my-values.yaml:
Druid CR my-tiny-cluster.yaml (omitted node definitions except for MM with PVC):
Output of "kubectl -n druid get pods":
Output of "kubectl -n druid get pvc":
No resources found in druid namespace.
Output of "kubectl -n druid logs druid-operator-nonprod-8585747989-jpmvq" (omitted node logs except for MM with PVC):