What happened:
After installing azurefile-csi-driver and azuredisk-csi-driver in a Kubernetes cluster, the csi-attacher container inside the csi-azurefile-controller and csi-azuredisk-controller pods crashes every 1 or 2 minutes with the following message:
csi-attacher log:
...
I1210 14:36:58.833263 1 reflector.go:153] Starting reflector *v1beta1.CSINode (10m0s) from k8s.io/client-go/informers/factory.go:135
I1210 14:36:58.833363 1 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
runtime: mlock of signal stack failed: 12
runtime: increase the mlock limit (ulimit -l) or
runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
fatal error: mlock failed
runtime stack:
runtime.throw(0x15dc7fc, 0xc)
/usr/lib/go-1.14/src/runtime/panic.go:1112 +0x72
runtime.mlockGsignal(0xc000502a80)
/usr/lib/go-1.14/src/runtime/os_linux_x86.go:72 +0x107
runtime.mpreinit(0xc000500380)
/usr/lib/go-1.14/src/runtime/os_linux.go:341 +0x78
runtime.mcommoninit(0xc000500380)
/usr/lib/go-1.14/src/runtime/proc.go:630 +0x108
runtime.allocm(0xc00006b000, 0x167d548, 0x22b8718)
/usr/lib/go-1.14/src/runtime/proc.go:1390 +0x14e
runtime.newm(0x167d548, 0xc00006b000)
/usr/lib/go-1.14/src/runtime/proc.go:1704 +0x39
runtime.startm(0x0, 0xc0004ca401)
/usr/lib/go-1.14/src/runtime/proc.go:1869 +0x12a
runtime.wakep(...)
/usr/lib/go-1.14/src/runtime/proc.go:1953
runtime.resetspinning()
/usr/lib/go-1.14/src/runtime/proc.go:2415 +0x93
runtime.schedule()
/usr/lib/go-1.14/src/runtime/proc.go:2527 +0x2de
runtime.mstart1()
/usr/lib/go-1.14/src/runtime/proc.go:1104 +0x8e
runtime.mstart()
/usr/lib/go-1.14/src/runtime/proc.go:1062 +0x6e
goroutine 1 [select]:
k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1.1(0x1380640, 0x0, 0xc000628180)
...
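The runtime message above points at two remedies: raise the max-locked-memory limit, or run a kernel where Go 1.14 no longer applies its mlock workaround (5.3.15+, 5.4.2+, or 5.5+). A quick sketch for checking both conditions on an affected node is below; the pgrep pattern for finding the csi-attacher process is an assumption, adjust it to how the process appears on your nodes.

# Kernel version: Go 1.14 compares only major.minor.patch, so 5.4.0-1032-azure is
# treated as older than 5.4.2 even if the distro kernel carries the backported fix.
uname -r

# Max locked memory available to new processes on the node.
ulimit -l

# Limits actually applied to the running csi-attacher process
# (pgrep pattern is an assumption; adjust to your node).
grep "locked memory" /proc/$(pgrep -f csi-attacher | head -n1)/limits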
What you expected to happen:
The container should not fail so frequently.
How to reproduce it:
The failure started right after installing v0.7.0 of azurefile-csi-driver. I upgraded to v0.9.0 (for both azurefile and azuredisk) with the same results. The Kubernetes cluster consists of 3 master nodes and 3 worker nodes running on Azure VMs (not AKS).
Anything else we need to know?:
Found a couple of issues in the golang/go repository that seem to be related:
From kubernetes-sigs/azurefile-csi-driver#495
Possibly, upgrading the Go version from 1.14 to 1.15 will solve the problem.
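Until images built with a newer Go are available, one possible stop-gap (not tested here, just a sketch assuming containerd-based nodes managed by systemd) is to raise the locked-memory limit that the container runtime hands to its containers, so the Go 1.14 runtime can keep mlocking signal stacks:

# Hypothetical workaround sketch: give containerd (and thus its containers)
# an unlimited memlock rlimit via a systemd drop-in, then restart it.
sudo mkdir -p /etc/systemd/system/containerd.service.d
printf '[Service]\nLimitMEMLOCK=infinity\n' | sudo tee /etc/systemd/system/containerd.service.d/memlock.conf
sudo systemctl daemon-reload
sudo systemctl restart containerd

If the nodes run Docker instead, the equivalent drop-in would go under docker.service.d. Either way this only masks the Go 1.14 behavior; the real fix is rebuilding the sidecar images with Go 1.15+ or running a kernel the runtime recognizes as patched.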
Environment:
- Kubernetes version (kubectl version): v1.19.14
- Kernel (uname -a): 5.4.0-1032-azure #33-Ubuntu SMP Fri Nov 13 14:23:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Complete log file: csi-attacher.log