Whenever I try to deploy this chart, the readiness probe fails.
helm install --name efk .
NAME: efk
LAST DEPLOYED: Fri Feb 22 12:32:37 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
efk-fluentd-elasticsearch-config 6 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efk-elasticsearch ClusterIP 172.20.190.54 <none> 9200/TCP 1s
efk-kibana ClusterIP 172.20.207.84 <none> 5601/TCP 1s
==> v1beta2/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
efk-kibana 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
efk-kibana-69cdf67b6f-c2ncb 0/1 ContainerCreating 0 0s
NOTES:
Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=kibana,release=efk" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:5601
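The pod never reaches READY 1/1, so the port-forward from the NOTES has nothing to connect to. One way to watch it restart (reusing the selector from the NOTES) is:

kubectl get pods --namespace default -l "app=kibana,release=efk" -w

Describing the service and the pod shows why: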
kubectl describe services efk-kibana
Name: efk-kibana
Namespace: default
Labels: app=kibana
chart=elasticsearch-fluentd-kibana
heritage=Tiller
release=efk
Annotations: <none>
Selector: app=kibana,release=efk
Type: ClusterIP
IP: 172.20.207.84
Port: kibana-ui 5601/TCP
TargetPort: kibana-ui/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
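Note that Endpoints is <none>: Kubernetes only adds a pod to a Service's endpoints once its readiness probe succeeds, so while the probe keeps failing the Service has nowhere to route traffic. This can be confirmed directly with:

kubectl get endpoints efk-kibana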
kubectl describe pods efk-kibana-69cdf67b6f-c2ncb
Name: efk-kibana-69cdf67b6f-c2ncb
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: k8snode01/10.34.88.166
Start Time: Fri, 22 Feb 2019 12:32:32 +0000
Labels: app=kibana
pod-template-hash=69cdf67b6f
release=efk
Annotations: cni.projectcalico.org/podIP=172.16.3.75/32
Status: Running
IP: 172.16.3.75
Controlled By: ReplicaSet/efk-kibana-69cdf67b6f
Containers:
efk-kibana:
Container ID: docker://f25dd9c82d300b6bb9b9810f3d1a437e2d39ee3244ca1675d185c499def462dd
Image: docker.elastic.co/kibana/kibana:6.2.4
Image ID: docker://sha256:327c6538ba4c2dd9a7bc509c29e7cb57a0f121a00935401bbe7e8a96b9a46ddf
Port: 5601/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 22 Feb 2019 12:34:17 +0000
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Fri, 22 Feb 2019 12:32:33 +0000
Finished: Fri, 22 Feb 2019 12:34:16 +0000
Ready: False
Restart Count: 1
Limits:
cpu: 1
Requests:
cpu: 100m
Liveness: http-get http://:kibana-ui/ delay=45s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:kibana-ui/ delay=40s timeout=1s period=10s #success=1 #failure=3
Environment:
ELASTICSEARCH_URL: http://efk-elasticsearch:9200
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-czthw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-czthw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-czthw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 3m default-scheduler Successfully assigned default/efk-kibana-69cdf67b6f-c2ncb to k8snode01
Normal Pulled 1m (x2 over 3m) kubelet, k8snode01 Container image "docker.elastic.co/kibana/kibana:6.2.4" already present on machine
Normal Created 1m (x2 over 3m) kubelet, k8snode01 Created container
Normal Started 1m (x2 over 3m) kubelet, k8snode01 Started container
Normal Killing 1m kubelet, k8snode01 Killing container with id docker://efk-kibana:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 24s (x6 over 2m) kubelet, k8snode01 Liveness probe failed: Get http://172.16.3.75:5601/: dial tcp 172.16.3.75:5601: connect: connection refused
Warning Unhealthy 15s (x9 over 2m) kubelet, k8snode01 Readiness probe failed: Get http://172.16.3.75:5601/: dial tcp 172.16.3.75:5601: connect: connection refused
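The Last State block (exit code 137, i.e. SIGKILL) together with the Killing event shows the cycle: the liveness probe (45s initial delay, 10s period, 3 failures allowed) gives up before Kibana finishes starting, so the kubelet kills the container about 100 seconds after it starts and everything begins again. As a quick, non-permanent test, both delays can be raised on the live Deployment (the container index 0 below is an assumption about how this chart renders its pod spec):

kubectl patch deployment efk-kibana --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 300},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 300}]'

If the pod then goes Ready, the same change should be made permanent through the chart's values so a helm upgrade doesn't revert it. If it still fails, kubectl logs efk-kibana-69cdf67b6f-c2ncb should show whether Kibana is stuck waiting for Elasticsearch at http://efk-elasticsearch:9200.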