DNS resolution seems broken with nftables and forwardKubeDNSToHost in development version #9196
This is an attempt to fix many issues related with trying to use Service IP for host DNS. Fixes siderolabs#9196 Signed-off-by: Andrey Smirnov <[email protected]>
Bug Report
Description
It seems that the introduction of `nftables` being used by default in the current 1.8 development branch (tried `v1.8.0-alpha.1-81-ge193e7db9`) breaks `forwardKubeDNSToHost`. Here's what I did: I started an `alpine:3.20` pod with `hostNetwork: true` solely to run `nslookup`:
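The original console output is not preserved in this copy of the issue; a minimal pod of the kind described might look like the sketch below (the pod name and the queried domains are illustrative assumptions, not taken from the report):

```yaml
# Hypothetical reproduction pod: image and hostNetwork follow the
# description above; the name is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
    - name: nslookup
      image: alpine:3.20
      command: ["sleep", "infinity"]
```

followed by something like `kubectl exec dns-test -- nslookup kubernetes.default.svc.cluster.local` for the in-cluster test, and `kubectl exec dns-test -- nslookup example.com 1.1.1.1` for the external cross-check.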
As can be seen, the address of the `host-dns` service is correct (the 9th IP address), yet no resolution happens. Network connectivity itself seems to be working fine, since simply pointing at a different nameserver works; for example, here is an answer from the Cloudflare NS:

What's interesting is that disabling `forwardKubeDNSToHost` brings DNS resolution back into a functioning state, but that doesn't seem like a proper solution.

So I started to play with different things, and accidentally tried to run
`kube-proxy` with the `iptables` backend instead, as before. I reprovisioned the control-plane node with a single change in its config file:

After this change, DNS resolution worked:
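For reference, the knobs under discussion live in the Talos machine config; the reporter's actual one-line diff is not included above, so this is only a sketch, with field names assumed from the Talos v1.x configuration surface:

```yaml
# Sketch only: the exact change the reporter made is not shown in
# this issue. These are the two settings being discussed.
machine:
  features:
    hostDNS:
      enabled: true
      forwardKubeDNSToHost: true   # disabling this also restored resolution, per the report
cluster:
  proxy:
    mode: iptables   # switching kube-proxy back from the nftables backend
```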
Thus, I suspect that something isn't fully configured in the latest `nftables` rules, or maybe kube-proxy is mangling something.

Logs
Logs don't contain any suspicious errors or warnings. The cluster bootstraps just fine, within 30 seconds, and the node is marked Ready.
Environment
ami-0e29f054809ce5025 (talos-v1.8.0-alpha.1-81-ge193e7db9-us-east-2-arm64)