v1.29.6+k3s2: 502 bad gateway when trying to get pod logs #10474
Comments
What is the IP in this error message? Are you by any chance setting the --bind-address for your nodes? If so, this sounds like a duplicate of #10444
Can you provide an example of that? The error above is from the apiserver attempting to make a request of the kubelet, not the control-plane. Please provide the output of
Confirmed, yes I am.
This is probably a misinterpretation on my part: I have a GitLab runner that spins up new pods for jobs, so I suspect it's running into the same issue with the kubelet, but I must confess I'm not an expert on how that bit works. I think we can safely call this a dupe of #10444, thanks @brandond.
Can you help us understand the use case for setting the --bind-address on your nodes? By default the kubelet binds to 0.0.0.0/[::]; if you restrict it to binding on a specific interface, then it is no longer listening on the loopback address, which several internal components expect it to be doing.
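The binding behavior described above can be demonstrated in isolation. The sketch below (not k3s code, just an illustration of the underlying socket semantics on Linux) binds a listener either to the wildcard address or to a specific address, then probes it over 127.0.0.1. Binding to 127.0.0.2 stands in for binding to a "private NIC" address, since on Linux the whole 127/8 range is assigned to the loopback interface and needs no extra setup:

```python
import socket

def bind_and_probe(bind_addr: str) -> bool:
    """Bind a TCP listener to bind_addr, then try to connect to it
    via 127.0.0.1. Returns True if the loopback connection succeeds."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((bind_addr, 0))   # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=1):
            return True
    except OSError:               # connection refused: not reachable on loopback
        return False
    finally:
        server.close()

# Bound to the wildcard address, the service answers on loopback...
print(bind_and_probe("0.0.0.0"))    # True

# ...but bound to one specific address it does not. 127.0.0.2 is a
# stand-in for a private interface address (Linux-specific shortcut).
print(bind_and_probe("127.0.0.2"))  # False
```

This is why components that reach the kubelet over the loopback address start failing once the kubelet is restricted to a single non-loopback interface.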
Yeah, it's trivial, really. Each node has two NICs. One is public, while the other is only reachable over a VPN. We want as little available on the public internet as possible, so we try to bind as much administrative stuff as we can to the private interface. Things might be different if we had an external firewall sitting in front of the whole cluster where I could make sure the control plane wasn't accessible, but that's not the way we're set up today. Also, there would still be an argument to be made about defense in depth. In case it's not obvious, I would take no issue with things continuing to listen on loopback for this use-case, if that's something you're considering.
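For readers reproducing this: the setup described above presumably amounts to something like the following in the k3s config file. The address shown is a placeholder for the node's private-NIC IP, not a value from this thread:

```yaml
# /etc/rancher/k3s/config.yaml
# Placeholder address: substitute the node's VPN/private interface IP.
# Per the discussion above, setting this restricts internal components
# (including the kubelet) from listening on loopback, which triggers
# the 502s described in this issue.
bind-address: "10.0.0.10"
```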
Environmental Info:
K3s Version:
v1.29.6+k3s2
Node(s) CPU architecture, OS, and Version:
Linux s1 6.1.0-22-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.94-1 (2024-06-21) x86_64 GNU/Linux
Cluster Configuration:
3 servers, dual-stack networking (IPv6 primary)
Describe the bug:
It's still taking shape, but I'm unable to get logs from any pod:
I'm seeing similar errors from operators; attempts to talk to the control plane result in a 502.
Steps To Reproduce:
I was running v1.29.5+k3s1. Upgraded to v1.29.6+k3s2 and here we are. Looks like I'll be rolling back to v1.29.5+k3s1 for the second time (v1.29.6+k3s1 was a misfire too, due to #10419).