calico-kube-controllers fails to reach 10.96.0.1:443 on startup #217

Comments
I am not sure if it should be located here or under …

Hi @Bolodya1997, you're in the right place 🙂 Thanks for the detailed report, this is very helpful! I noticed you used a public IP for the apiserver (…). If that doesn't fix the issue, it would be helpful if you could install calivppctl and attach the output of …

Thank you!
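Following up on the maintainer's suggestion to attach diagnostics: a minimal sketch of a collection helper, assuming calivppctl is installed on a cluster node and that its `export` subcommand (as described in the vpp-dataplane project's docs) bundles logs and state into an archive suitable for attaching to a report. The function name `collect_vpp_diagnostics` is hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: gather Calico/VPP diagnostics for a bug report.
# Assumption: `calivppctl export` is the diagnostics-export subcommand;
# verify against your installed calivppctl version.
collect_vpp_diagnostics() {
  if ! command -v calivppctl >/dev/null 2>&1; then
    echo "calivppctl not found on PATH" >&2
    return 1
  fi
  calivppctl export
}
```

The guard keeps the script from failing cryptically on nodes where the tool was never installed.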
Environment

- Version: v0.16.0-calicov3.20.0
- Platform: equinix.metal, host nodes OS is Ubuntu 20.04 LTS.
- Control plane node eno2 and worker node eno2 are in the same untagged VLAN.

Issue description
After applying calico-vpp-nohuge.yaml (with the additional configuration described in the reproduction steps), calico-kube-controllers loops in a CrashLoopBackOff with the following error in its logs:

To Reproduce
Steps to reproduce the behavior:

1. Create n2.xlarge.x86 servers with Ubuntu 20.04 LTS on https://metal.equinix.com/.
2. Set up a Kubernetes cluster on the Ubuntu 20.04 LTS hosts.
3. Configure bond0 with 147.75.38.85/31 for the control plane node and 147.75.75.133/31 for the worker node.
4. Configure eno2 with 10.0.0.1/30 for the control plane node and 10.0.0.2/30 for the worker node.
5. Copy the join-cluster.sh script from the control plane node to the worker node and run it, using 10.0.0.1 as the node IP for the control plane node and 10.0.0.2 as the node IP for the worker node.
6. Copy ~/.kube/config from the control plane node to your own host.
7. Configure calico-vpp-nohuge.yaml with: …
8. Run kubectl apply -f calico-vpp-nohuge.yaml from your own host.

Expected behavior
All calico pods should start running (probably with a few restarts).
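While waiting for the pods to converge, one quick way to spot the stragglers is to filter the output of the standard `kubectl get pods -A` listing. The helper below is a hypothetical sketch: it assumes kubectl's default column layout (NAMESPACE, NAME, READY, STATUS, ...) and the function name `not_ready_pods` is not from the original report.

```shell
#!/usr/bin/env bash
# Sketch: print "namespace/pod status" for pods whose STATUS column
# (field 4 of `kubectl get pods -A`) is neither Running nor Completed.
not_ready_pods() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $1 "/" $2 " " $4 }'
}
# Typical use (requires a configured kubeconfig):
#   kubectl get pods -A | not_ready_pods
```

A pod stuck in CrashLoopBackOff, as described above, would show up immediately in this filtered view.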
Additional context

- Setup with weave as CNI works correctly.
- … 10.96.0.1:443: …
- … 10.96.0.1:443: …
- A pod with hostNetwork: false from kube-system is able to reach 10.96.0.1:443: …
- coredns pods fail to get ready with the following logs: …
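The reachability checks above can be reproduced with a small probe. This is a sketch, not the reporter's original commands: it uses bash's built-in /dev/tcp device so it also works inside minimal container images that lack curl; the function name `check_tcp` and the 3-second timeout are assumptions.

```shell
#!/usr/bin/env bash
# Sketch: TCP reachability probe for the in-cluster apiserver service
# address (10.96.0.1:443 in this report). Returns 0 if a TCP connection
# can be opened within the timeout, nonzero otherwise.
check_tcp() {  # usage: check_tcp <host> <port>
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}
# Example against the cluster-specific address from this report:
#   check_tcp 10.96.0.1 443 && echo reachable || echo unreachable
```

Running this from a hostNetwork pod versus a regular pod helps narrow down whether the dataplane, rather than the host routing, is dropping traffic to the service CIDR.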