
restart: waiting for component=kube-apiserver: timed out waiting for the condition #4221

Closed
Th3G4mbl3r opened this issue May 8, 2019 · 3 comments
Labels
co/apiserver (Issues relating to apiserver configuration (--extra-config)), ev/apiserver-timeout (timeout talking to the apiserver)

Comments

@Th3G4mbl3r

The exact command to reproduce the issue:

minikube start

The full output of the command that failed:

➜ minikube start
😄 minikube v1.0.0 on darwin (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.2-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.14.0 ...
🔄 Relaunching Kubernetes v1.14.0 using kubeadm ...
⌛ Waiting for pods: apiserver
💣 Error restarting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
❌ Problems detected in "kube-addon-manager":
error: no objects passed to apply

The output of the minikube logs command:

==> coredns <==
.:53
2019-05-08T03:56:57.77Z [INFO] CoreDNS-1.2.6
2019-05-08T03:56:57.775Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
E0508 03:57:22.770553 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0508 03:57:22.771078 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0508 03:57:22.771242 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <==
[ +5.000998] hpet1: lost 318 rtc interrupts
[ +5.000309] hpet1: lost 318 rtc interrupts
[ +5.001248] hpet1: lost 318 rtc interrupts
[May 8 04:08] hpet1: lost 319 rtc interrupts
[ +5.000961] hpet1: lost 318 rtc interrupts
[ +5.001291] hpet1: lost 318 rtc interrupts
[ +5.000474] hpet1: lost 318 rtc interrupts
[ +5.000844] hpet1: lost 318 rtc interrupts
[ +5.001098] hpet1: lost 318 rtc interrupts
[ +5.001410] hpet1: lost 319 rtc interrupts
[ +5.001171] hpet1: lost 318 rtc interrupts
[ +5.001519] hpet1: lost 318 rtc interrupts
[ +5.000508] hpet1: lost 318 rtc interrupts
[ +5.001117] hpet1: lost 318 rtc interrupts
[ +5.000982] hpet1: lost 318 rtc interrupts
[May 8 04:09] hpet1: lost 318 rtc interrupts
[ +5.001663] hpet1: lost 318 rtc interrupts
[ +5.000760] hpet1: lost 318 rtc interrupts
[ +5.000615] hpet1: lost 319 rtc interrupts
[ +5.000811] hpet1: lost 319 rtc interrupts
[ +5.001267] hpet1: lost 318 rtc interrupts
[ +5.001165] hpet1: lost 318 rtc interrupts
[ +5.000821] hpet1: lost 318 rtc interrupts
[ +5.001079] hpet1: lost 319 rtc interrupts
[ +5.001400] hpet1: lost 318 rtc interrupts
[ +5.001608] hpet1: lost 318 rtc interrupts
[ +5.000745] hpet1: lost 318 rtc interrupts
[May 8 04:10] hpet1: lost 318 rtc interrupts
[ +5.001343] hpet1: lost 318 rtc interrupts
[ +5.001337] hpet1: lost 318 rtc interrupts
[ +5.000858] hpet1: lost 318 rtc interrupts
[ +5.001078] hpet1: lost 318 rtc interrupts
[ +5.000693] hpet1: lost 318 rtc interrupts
[ +5.001161] hpet1: lost 318 rtc interrupts
[ +5.001215] hpet1: lost 318 rtc interrupts
[ +5.000560] hpet1: lost 318 rtc interrupts
[ +5.001602] hpet1: lost 318 rtc interrupts
[ +5.001207] hpet1: lost 319 rtc interrupts
[ +5.001609] hpet1: lost 318 rtc interrupts
[May 8 04:11] hpet1: lost 318 rtc interrupts
[ +5.000780] hpet1: lost 318 rtc interrupts
[ +5.000774] hpet1: lost 318 rtc interrupts
[ +5.000824] hpet1: lost 318 rtc interrupts
[ +5.001290] hpet1: lost 318 rtc interrupts
[ +5.000918] hpet1: lost 318 rtc interrupts
[ +5.002051] hpet1: lost 318 rtc interrupts
[ +5.000597] hpet1: lost 318 rtc interrupts
[ +5.001062] hpet1: lost 318 rtc interrupts
[ +5.001225] hpet1: lost 318 rtc interrupts
[ +5.001087] hpet1: lost 318 rtc interrupts

==> kernel <==
04:11:55 up 16 min, 0 users, load average: 4.00, 3.69, 2.68
Linux minikube 4.15.0 #1 SMP Fri Feb 15 19:27:06 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-08T04:08:51+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-08T04:09:49+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-08T04:09:51+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-08T04:10:49+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-08T04:10:51+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-08T04:11:49+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
deployment.extensions/default-http-backend unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
service/default-http-backend unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-08T04:11:51+00:00 ==

==> kube-apiserver <==
I0508 04:11:31.043373 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:31.043705 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:32.043908 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:32.044411 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:33.044891 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:33.045118 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:34.045413 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:34.045513 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:35.049926 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:35.050094 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:36.050433 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:36.050707 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:37.051062 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:37.051358 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:38.051648 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:38.052132 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:39.053259 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:39.053570 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:40.053854 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:40.054035 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:41.054249 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:41.054534 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:42.054866 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:42.055186 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:43.055478 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:43.055863 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:44.055981 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:44.056212 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:45.056997 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:45.057278 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:46.058297 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:46.058655 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:47.058743 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:47.059109 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:48.059716 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:48.059910 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:49.060027 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:49.060351 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:50.060677 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:50.060951 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:51.061534 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:51.061960 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:52.062310 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:52.062612 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:53.062979 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:53.063289 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:54.063421 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:54.063578 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0508 04:11:55.064293 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0508 04:11:55.064487 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002

==> kube-proxy <==
W0508 03:56:56.710578 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0508 03:56:56.788517 1 server_others.go:148] Using iptables Proxier.
W0508 03:56:56.788630 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0508 03:56:56.788671 1 server_others.go:178] Tearing down inactive rules.
I0508 03:56:57.093941 1 server.go:464] Version: v1.13.3
I0508 03:56:57.211061 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0508 03:56:57.211188 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0508 03:56:57.211386 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0508 03:56:57.211573 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0508 03:56:57.211811 1 config.go:202] Starting service config controller
I0508 03:56:57.211964 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0508 03:56:57.212961 1 config.go:102] Starting endpoints config controller
I0508 03:56:57.213082 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0508 03:56:57.313640 1 controller_utils.go:1034] Caches are synced for service config controller
I0508 03:56:57.313683 1 controller_utils.go:1034] Caches are synced for endpoints config controller

==> kube-scheduler <==
I0508 03:56:43.303601 1 serving.go:319] Generated self-signed cert in-memory
W0508 03:56:45.081952 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0508 03:56:45.082028 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0508 03:56:45.082051 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0508 03:56:45.239033 1 server.go:142] Version: v1.14.0
I0508 03:56:45.239442 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0508 03:56:45.241058 1 authorization.go:47] Authorization is disabled
W0508 03:56:45.241183 1 authentication.go:55] Authentication is disabled
I0508 03:56:45.241428 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0508 03:56:45.244650 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0508 03:56:50.460832 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0508 03:56:50.561492 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0508 03:56:50.561584 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0508 03:57:05.840019 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-05-08 03:55:55 UTC, end at Wed 2019-05-08 04:11:55 UTC. --
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.747963 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamconfig" (UniqueName: "kubernetes.io/empty-dir/2e41cf60-6fbb-11e9-ad84-080027bcc7ea-isamconfig") pod "isamdsc-7c65fcdf84-hdt8v" (UID: "2e41cf60-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748008 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamdsc-logs" (UniqueName: "kubernetes.io/empty-dir/2e41cf60-6fbb-11e9-ad84-080027bcc7ea-isamdsc-logs") pod "isamdsc-7c65fcdf84-hdt8v" (UID: "2e41cf60-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748074 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamconfig" (UniqueName: "kubernetes.io/empty-dir/f959811c-3f17-11e9-aa65-080027bcc7ea-isamconfig") pod "isamwrpop1-5c4f8b6b6c-6fs2p" (UID: "f959811c-3f17-11e9-aa65-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748094 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f8669ed6-3ef9-11e9-94d2-080027bcc7ea-tmp") pod "storage-provisioner" (UID: "f8669ed6-3ef9-11e9-94d2-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748152 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-25pkc" (UniqueName: "kubernetes.io/secret/2e41cf60-6fbb-11e9-ad84-080027bcc7ea-default-token-25pkc") pod "isamdsc-7c65fcdf84-hdt8v" (UID: "2e41cf60-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748180 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamwrprp1-logs" (UniqueName: "kubernetes.io/empty-dir/2e0e15f0-6fbb-11e9-ad84-080027bcc7ea-isamwrprp1-logs") pod "isamwrprp1-6989b8f57-zdqld" (UID: "2e0e15f0-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748442 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamconfig-logs" (UniqueName: "kubernetes.io/empty-dir/2dfd55d2-6fbb-11e9-ad84-080027bcc7ea-isamconfig-logs") pod "isamconfig-598d76dd87-rhkwf" (UID: "2dfd55d2-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748479 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/17044429-43eb-11e9-8a02-080027bcc7ea-kube-proxy") pod "kube-proxy-szfwd" (UID: "17044429-43eb-11e9-8a02-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748503 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/17044429-43eb-11e9-8a02-080027bcc7ea-xtables-lock") pod "kube-proxy-szfwd" (UID: "17044429-43eb-11e9-8a02-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748542 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-25pkc" (UniqueName: "kubernetes.io/secret/2dfd55d2-6fbb-11e9-ad84-080027bcc7ea-default-token-25pkc") pod "isamconfig-598d76dd87-rhkwf" (UID: "2dfd55d2-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748568 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-2zwtc" (UniqueName: "kubernetes.io/secret/0ca14073-3f02-11e9-8c66-080027bcc7ea-default-token-2zwtc") pod "default-http-backend-5ff9d456ff-kr8mx" (UID: "0ca14073-3f02-11e9-8c66-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748584 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamruntime-logs" (UniqueName: "kubernetes.io/empty-dir/2e250063-6fbb-11e9-ad84-080027bcc7ea-isamruntime-logs") pod "isamruntime-5f8468cc7c-8rxts" (UID: "2e250063-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748598 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamconfig" (UniqueName: "kubernetes.io/empty-dir/2e0e15f0-6fbb-11e9-ad84-080027bcc7ea-isamconfig") pod "isamwrprp1-6989b8f57-zdqld" (UID: "2e0e15f0-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748617 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2dd022b5-6fbb-11e9-ad84-080027bcc7ea" (UniqueName: "kubernetes.io/host-path/2dfd55d2-6fbb-11e9-ad84-080027bcc7ea-pvc-2dd022b5-6fbb-11e9-ad84-080027bcc7ea") pod "isamconfig-598d76dd87-rhkwf" (UID: "2dfd55d2-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748632 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-9b6cc" (UniqueName: "kubernetes.io/secret/f8669ed6-3ef9-11e9-94d2-080027bcc7ea-storage-provisioner-token-9b6cc") pod "storage-provisioner" (UID: "f8669ed6-3ef9-11e9-94d2-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: I0508 03:56:52.748647 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "isamconfig" (UniqueName: "kubernetes.io/empty-dir/2e250063-6fbb-11e9-ad84-080027bcc7ea-isamconfig") pod "isamruntime-5f8468cc7c-8rxts" (UID: "2e250063-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.996154 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997113 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997169 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997191 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r80049cac61ab4913bd16e15274861d53.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997334 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997369 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997384 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997399 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r30a969b028d246178c301a5ffcf0cac1.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997575 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997606 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997625 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope: no such file or directory
May 08 03:56:52 minikube kubelet[3022]: W0508 03:56:52.997679 3022 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rdf8fe4d7e852475890f96eb1bfd12f2c.scope: no such file or directory
May 08 03:56:53 minikube kubelet[3022]: I0508 03:56:53.573147 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2dc53dcb-6fbb-11e9-ad84-080027bcc7ea" (UniqueName: "kubernetes.io/host-path/2ddf502f-6fbb-11e9-ad84-080027bcc7ea-pvc-2dc53dcb-6fbb-11e9-ad84-080027bcc7ea") pod "openldap-7f69f9b9b9-29mvl" (UID: "2ddf502f-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:54 minikube kubelet[3022]: W0508 03:56:54.195735 3022 pod_container_deletor.go:75] Container "7088451f3cc6ca7ad3aa4a05908b691702822cd213ae77d0c2b4a4cc30e4f5c2" not found in pod's containers
May 08 03:56:54 minikube kubelet[3022]: I0508 03:56:54.391590 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2dca0bd2-6fbb-11e9-ad84-080027bcc7ea" (UniqueName: "kubernetes.io/host-path/2ddf502f-6fbb-11e9-ad84-080027bcc7ea-pvc-2dca0bd2-6fbb-11e9-ad84-080027bcc7ea") pod "openldap-7f69f9b9b9-29mvl" (UID: "2ddf502f-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:54 minikube kubelet[3022]: W0508 03:56:54.586962 3022 pod_container_deletor.go:75] Container "4de44e4fc6669f1c382a8933b7b9ce251b0f6471009eeb0f62a28f8aad0cca3d" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: I0508 03:56:55.198125 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-2dcd25d2-6fbb-11e9-ad84-080027bcc7ea" (UniqueName: "kubernetes.io/host-path/2ddf502f-6fbb-11e9-ad84-080027bcc7ea-pvc-2dcd25d2-6fbb-11e9-ad84-080027bcc7ea") pod "openldap-7f69f9b9b9-29mvl" (UID: "2ddf502f-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:55 minikube kubelet[3022]: I0508 03:56:55.198178 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-25pkc" (UniqueName: "kubernetes.io/secret/2ddf502f-6fbb-11e9-ad84-080027bcc7ea-default-token-25pkc") pod "openldap-7f69f9b9b9-29mvl" (UID: "2ddf502f-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:55 minikube kubelet[3022]: I0508 03:56:55.198769 3022 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "openldap-keys" (UniqueName: "kubernetes.io/secret/2ddf502f-6fbb-11e9-ad84-080027bcc7ea-openldap-keys") pod "openldap-7f69f9b9b9-29mvl" (UID: "2ddf502f-6fbb-11e9-ad84-080027bcc7ea")
May 08 03:56:55 minikube kubelet[3022]: I0508 03:56:55.198848 3022 reconciler.go:154] Reconciler: start to sync state
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.427009 3022 pod_container_deletor.go:75] Container "8c26127b9a2777033f7598243f667d80d4c22afe4bcdcc90f27a9cc4f96d52db" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.476865 3022 pod_container_deletor.go:75] Container "cbd1372d762d688ee47953f281b043d4873f129bbcdb9f90f83c803552a7bf8a" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.512172 3022 pod_container_deletor.go:75] Container "205755e7c8e7b65f4c8ed75eadd278423f384c4f0e570faf8167024a70d6bd61" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.530829 3022 pod_container_deletor.go:75] Container "e1284ca7c18651a5f3eb685666e8eb550a31b4bf23e3075fb1c584fba7f39fd7" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.546005 3022 pod_container_deletor.go:75] Container "0022407db7cce1da1dc3140d685ae83daa384cf6bc0149d933c4fed810a7c097" not found in pod's containers
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.624979 3022 container.go:409] Failed to create summary reader for "/system.slice/run-r90f5d74681334a70b64bfd03f5c5e37d.scope": none of the resources are being tracked.
May 08 03:56:55 minikube kubelet[3022]: W0508 03:56:55.625188 3022 container.go:409] Failed to create summary reader for "/system.slice/run-rbc730d00595f42568f5c225b4cbc8791.scope": none of the resources are being tracked.
May 08 03:56:56 minikube kubelet[3022]: E0508 03:56:56.779760 3022 pod_workers.go:190] Error syncing pod f8669ed6-3ef9-11e9-94d2-080027bcc7ea ("storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"
May 08 03:56:57 minikube kubelet[3022]: W0508 03:56:57.859075 3022 pod_container_deletor.go:75] Container "76137c843d24705ab099095e154f3855b9c8ba933718e254c3905cfe81fc3bb9" not found in pod's containers
May 08 03:56:58 minikube kubelet[3022]: E0508 03:56:58.944895 3022 pod_workers.go:190] Error syncing pod f8669ed6-3ef9-11e9-94d2-080027bcc7ea ("storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"
May 08 03:57:00 minikube kubelet[3022]: E0508 03:57:00.049341 3022 pod_workers.go:190] Error syncing pod f8669ed6-3ef9-11e9-94d2-080027bcc7ea ("storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8669ed6-3ef9-11e9-94d2-080027bcc7ea)"
May 08 03:57:25 minikube kubelet[3022]: E0508 03:57:25.544919 3022 pod_workers.go:190] Error syncing pod c4b2a63b-67d5-11e9-95f2-080027bcc7ea ("kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system(c4b2a63b-67d5-11e9-95f2-080027bcc7ea)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system(c4b2a63b-67d5-11e9-95f2-080027bcc7ea)"
May 08 03:57:32 minikube kubelet[3022]: E0508 03:57:32.691438 3022 pod_workers.go:190] Error syncing pod c4b2a63b-67d5-11e9-95f2-080027bcc7ea ("kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system(c4b2a63b-67d5-11e9-95f2-080027bcc7ea)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system(c4b2a63b-67d5-11e9-95f2-080027bcc7ea)"
May 08 04:11:40 minikube kubelet[3022]: W0508 04:11:40.528628 3022 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 174717 (175087)

==> kubernetes-dashboard <==
2019/05/08 03:57:43 Starting overwatch
2019/05/08 03:57:43 Using in-cluster config to connect to apiserver
2019/05/08 03:57:43 Using service account token for csrf signing
2019/05/08 03:57:43 Successful initial request to the apiserver, version: v1.14.0
2019/05/08 03:57:43 Generating JWE encryption key
2019/05/08 03:57:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/05/08 03:57:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/05/08 03:57:43 Initializing JWE encryption key from synchronized object
2019/05/08 03:57:43 Creating in-cluster Heapster client
2019/05/08 03:57:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 03:57:44 Serving insecurely on HTTP port: 9090
2019/05/08 03:58:13 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 03:58:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 03:59:13 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 03:59:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:00:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:01:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:01:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:02:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:02:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:03:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:03:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:04:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:04:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:05:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:05:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:06:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:06:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:07:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:07:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:08:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:08:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:09:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:09:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:10:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:10:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:11:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/05/08 04:11:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

==> storage-provisioner <==

The operating system version:

macOS Mojave 10.14.4

@Th3G4mbl3r
Author

Output of docker ps inside the minikube machine (minikube ssh):

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd41c8236a99 f9aed6605b81 "/dashboard --insecu…" About an hour ago Up About an hour
k8s_kubernetes-dashboard_kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system_c4b2a63b-67d5-11e9-95f2-080027bcc7ea_8
9621cd9c6953 42d47fe0c78f "/usr/bin/dumb-init …" About an hour ago Up About an hour
k8s_nginx-ingress-controller_nginx-ingress-controller-586cdc477c-6cwfd_kube-system_8a7a84b7-67d5-11e9-95f2-080027bcc7ea_6
5c08f2a146a0 4689081edb10 "/storage-provisioner" About an hour ago Up About an hour
k8s_storage-provisioner_storage-provisioner_kube-system_f8669ed6-3ef9-11e9-94d2-080027bcc7ea_51
29b1372d256e d1c81c9ff265 "/container/tool/run…" About an hour ago Up About an hour k8s_openldap_openldap-7f69f9b9b9-29mvl_default_2ddf502f-6fbb-11e9-ad84-080027bcc7ea_3
9a218c1aac7a 846921f0fe0e "/server" About an hour ago Up About an hour k8s_default-http-backend_default-http-backend-5ff9d456ff-kr8mx_kube-system_0ca14073-3f02-11e9-8c66-080027bcc7ea_24
76137c843d24 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_openldap-7f69f9b9b9-29mvl_default_2ddf502f-6fbb-11e9-ad84-080027bcc7ea_3
a145c352e6cd 3907da302d13 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_isamconfig_isamconfig-598d76dd87-rhkwf_default_2dfd55d2-6fbb-11e9-ad84-080027bcc7ea_3
ecd443581c10 3907da302d13 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_isamruntime_isamruntime-5f8468cc7c-8rxts_default_2e250063-6fbb-11e9-ad84-080027bcc7ea_3
e232d652f1c2 3907da302d13 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_isamdsc_isamdsc-7c65fcdf84-hdt8v_default_2e41cf60-6fbb-11e9-ad84-080027bcc7ea_3
5f172e303eec 3907da302d13 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_isamwrprp1_isamwrprp1-6989b8f57-zdqld_default_2e0e15f0-6fbb-11e9-ad84-080027bcc7ea_3
38e7bb3c897e 98db19758ad4 "/usr/local/bin/kube…" About an hour ago Up About an hour k8s_kube-proxy_kube-proxy-szfwd_kube-system_17044429-43eb-11e9-8a02-080027bcc7ea_11
9da1b60d1e99 3907da302d13 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_isamwrpop1_isamwrpop1-5c4f8b6b6c-6fs2p_default_f959811c-3f17-11e9-aa65-080027bcc7ea_22
7088451f3cc6 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-proxy-szfwd_kube-system_17044429-43eb-11e9-8a02-080027bcc7ea_11
8c26127b9a27 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_default-http-backend-5ff9d456ff-kr8mx_kube-system_0ca14073-3f02-11e9-8c66-080027bcc7ea_24
0022407db7cc k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_storage-provisioner_kube-system_f8669ed6-3ef9-11e9-94d2-080027bcc7ea_25
cbd1372d762d k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_isamdsc-7c65fcdf84-hdt8v_default_2e41cf60-6fbb-11e9-ad84-080027bcc7ea_3
e1284ca7c186 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_isamconfig-598d76dd87-rhkwf_default_2dfd55d2-6fbb-11e9-ad84-080027bcc7ea_3
4de44e4fc666 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_isamruntime-5f8468cc7c-8rxts_default_2e250063-6fbb-11e9-ad84-080027bcc7ea_3
205755e7c8e7 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_isamwrprp1-6989b8f57-zdqld_default_2e0e15f0-6fbb-11e9-ad84-080027bcc7ea_3
b8b03cbd9752 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kubernetes-dashboard-79dd6bfc48-7qp6w_kube-system_c4b2a63b-67d5-11e9-95f2-080027bcc7ea_4
eb3deb551e77 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_isamwrpop1-5c4f8b6b6c-6fs2p_default_f959811c-3f17-11e9-aa65-080027bcc7ea_22
be9bf8e360a1 f59dcacceff4 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-86c58d9df4-xxkwf_kube-system_f764e77e-3ef9-11e9-94d2-080027bcc7ea_25
0318366c9a4b f59dcacceff4 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-86c58d9df4-ngvfw_kube-system_f76745fa-3ef9-11e9-94d2-080027bcc7ea_25
ded11b6f189b k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_coredns-86c58d9df4-xxkwf_kube-system_f764e77e-3ef9-11e9-94d2-080027bcc7ea_25
ec90bd5958fc k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_coredns-86c58d9df4-ngvfw_kube-system_f76745fa-3ef9-11e9-94d2-080027bcc7ea_25
3eef029f9629 e3482ff39195 "/sbin/bootstrap.sh" About an hour ago Up About an hour k8s_postgresql_postgresql-6b497c7d89-p9wxg_default_2decc7cb-6fbb-11e9-ad84-080027bcc7ea_3
12f93c72b3b0 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_postgresql-6b497c7d89-p9wxg_default_2decc7cb-6fbb-11e9-ad84-080027bcc7ea_3
2d4ae06241b3 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:18080->18080/tcp k8s_POD_nginx-ingress-controller-586cdc477c-6cwfd_kube-system_8a7a84b7-67d5-11e9-95f2-080027bcc7ea_4
2bf72d0c5b0e b95b1efa0436 "kube-controller-man…" About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_4
20210eb1b0c5 2c4adeb21b4f "etcd --advertise-cl…" About an hour ago Up About an hour k8s_etcd_etcd-minikube_kube-system_18c827a17f0a6b507c2029890cd786ad_4
fe5cb72d898f 00638a24688b "kube-scheduler --bi…" About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_4
fc2f187039f2 119701e77cbc "/opt/kube-addons.sh" About an hour ago Up About an hour k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_4
816153017a9a ecf910f40d6e "kube-apiserver --ad…" About an hour ago Up About an hour k8s_kube-apiserver_kube-apiserver-minikube_kube-system_023cdc77988402bd2101e9dc50c78f18_4
d18155c31f48 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_etcd-minikube_kube-system_18c827a17f0a6b507c2029890cd786ad_4
258d9feceb05 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_4
63106ebc188e k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-minikube_kube-system_58272442e226c838b193bbba4c44091e_4
e1c8e4c97095 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-minikube_kube-system_2899d819dcdb72186fb15d30a0cc5a71_4
c509567f6992 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-minikube_kube-system_023cdc77988402bd2101e9dc50c78f18_4
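
For anyone digging further from the same minikube ssh session, the kube-apiserver container's own logs can be tailed directly. The container ID below comes from the kube-apiserver row in the listing above; port 8443 is minikube's usual apiserver port and curl being available in the VM is an assumption, so both may differ on other setups:

docker logs --tail 50 816153017a9a
# quick local health probe against the apiserver's secure port
curl -k https://localhost:8443/healthz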

tstromberg changed the title from "MacOS Mojave - Minikube start failed - kube-apiserver failed." to "restart: waiting for component=kube-apiserver: timed out waiting for the condition" on May 14, 2019
tstromberg added the co/apiserver (Issues relating to apiserver configuration (--extra-config)) and ev/apiserver-timeout (timeout talking to the apiserver) labels on May 14, 2019
@tstromberg
Contributor

I honestly can't figure out why this would have happened.

In the meantime, you can probably clear this up by running minikube delete.
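
A minimal reset sequence, assuming the existing cluster state can be thrown away (the -v=7 flag simply raises log verbosity so a second failure is easier to diagnose):

minikube delete
minikube start -v=7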

@tstromberg
Contributor

I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube and run minikube delete to remove the previous cluster state.

If the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
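
For reference, one way to upgrade on macOS is to fetch the latest darwin-amd64 binary directly from the release bucket (Homebrew users could instead try brew cask upgrade minikube, assuming that is how it was installed), then discard the old cluster state and start fresh:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube delete
minikube start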
