
flannel-v6.1 MAC address changes every boot #9957

Closed
kyrofa opened this issue Apr 16, 2024 · 19 comments
@kyrofa commented Apr 16, 2024

Environmental Info:
K3s Version:

$ k3s -v
k3s version v1.28.8+k3s1 (653dd61a)
go version go1.21.8

Node(s) CPU architecture, OS, and Version:

$ uname -a
Linux s1 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux

Cluster Configuration:
3-server cluster, dual-stack (IPv4 and IPv6). Each node has two NICs: one public, one private. Using flannel with the vxlan backend.

Describe the bug:
Whenever one of my nodes (node A) reboots, the flannel.1 interface's MAC address stays the same, but the flannel-v6.1 interface's MAC address changes. This breaks the flannel network: the other two nodes (B and C) believe A is reachable via its old MAC, but it isn't. As a result, node A cannot ping the flannel-v6.1 interfaces on nodes B or C, and vice versa (while flannel.1 pings work just fine).

The problem is two-fold:

  1. On nodes B and C, ip -6 neighbor show | grep <node A's flannel-v6.1 IP> has the old MAC address
  2. On nodes B and C, bridge fdb show dev flannel-v6.1 | grep <node A's node-ip> has the old MAC address

I'm able to work around this with the following steps whenever a node reboots (a sketch for reading off the new MAC follows the list):

  1. On all non-rebooted nodes, run ip -6 neighbor change <flannel-v6.1 IP> dev flannel-v6.1 lladdr <new mac>
  2. On all non-rebooted nodes, run bridge fdb add to <new mac> dst <rebooted node's node-ip> dev flannel-v6.1
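
For reference, the new MAC address to plug into those commands can be read off the rebooted node with something like the following (a minimal sketch; the awk filter is just one way to extract the link-layer address):

# On the rebooted node (A): print the current MAC of the IPv6 VTEP interface
ip link show flannel-v6.1 | awk '/link\/ether/ {print $2}'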

I've been running k3s for about a year now, and this is definitely the first time this has happened. It's been a little while since I rebooted, though, and I update k3s whenever an update comes out on the stable channel (I haven't done the v1.29 update yet due to this issue), so I suspect this regression was introduced recently.

@brandond (Member)

cc @manuelbuil.

This may belong in https://github.com/flannel-io/flannel

@manuelbuil (Contributor)

Thanks for reporting this! I don't think this is a regression. Flannel was picking a new MAC address for the vxlan interface on every reboot, and that was fixed with this PR: flannel-io/flannel#1829. But it seems the same logic was never added to the IPv6 interface.
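
For anyone debugging this in the meantime: the MAC that flannel advertises for the IPv6 VTEP is stored as a node annotation, so it can be compared against the MAC actually on the interface after a reboot. A minimal sketch, assuming the default flannel annotation name (flannel.alpha.coreos.com/backend-v6-data) and a hypothetical node name node-a:

# MAC advertised to the rest of the cluster (the value is JSON, typically with a VtepMAC field)
kubectl get node node-a -o jsonpath='{.metadata.annotations.flannel\.alpha\.coreos\.com/backend-v6-data}'

# MAC actually configured on node A's interface
ip link show flannel-v6.1 | awk '/link\/ether/ {print $2}'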

@manuelbuil (Contributor)

flannel-io/flannel#1946

@kyrofa (Author) commented Apr 17, 2024

Thank you for making that PR, @manuelbuil. Just for my understanding, can you expand on how this isn't a regression? i.e. how have I never experienced this problem? By "problem" I mean the broken flannel network, not the changing MAC. I just assumed the broken network was due to the changing MAC, but it looks like prior to flannel-io/flannel#1829 both interfaces' MAC should have been changing, which means that either I (and everyone else using flannel) should have experienced this issue on every reboot I've done, or I'm missing something.

Do you expect flannel to actually be able to handle changing MAC addresses? If so, that functionality appears to have broken somehow. Did k3s change the config to make the interfaces non-learning, perhaps? That might be worth looking into, although once your PR lands it looks like neither interface should be changing anymore.
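
If it helps with that investigation, whether the VTEP was created with learning disabled shows up in the detailed link output; a minimal sketch:

# Detailed vxlan options for the IPv6 VTEP; "nolearning" appears in the vxlan
# section when source-address learning is disabled on the interface.
ip -d link show flannel-v6.1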

@manuelbuil (Contributor)

My understanding is that before that PR, MAC addresses were changing on each reboot. I don't think K3s changes any default kernel behaviour, so yes, the bug should have been present in K3s. Maybe Linux networking components were able to re-learn the new MAC address quickly except in certain environments? It could be a nice investigation to do, I agree.
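
One way to watch whether the stale entries ever get re-learned (a minimal sketch; the NUD state such as REACHABLE, STALE or FAILED is printed at the end of each entry):

# On nodes B and C: inspect the neighbor entry for node A's flannel-v6.1 address
ip -6 neighbor show dev flannel-v6.1
# or watch changes live
ip -6 monitor neigh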

@kyrofa (Author) commented Apr 18, 2024

Huh, how odd. Doesn't really matter, I guess; your flannel PR will fix the issue. Any idea when that will be included in a k3s release?

@manuelbuil (Contributor)

It should be included in the May release. We are currently under code freeze for the April release

@sergitron commented Apr 23, 2024

Hi @manuelbuil, is there a potential workaround for Canal on RHEL 8.8 until the release? A previous issue recommended setting MACAddressPolicy to none in a systemd .link file, but I'm not sure this would work, as /etc/systemd/network doesn't exist on my RHEL nodes and this is an RKE2 system running Canal. We are using IPv6 for our primary pod-to-pod traffic.

cat<<'EOF'>/etc/systemd/network/10-flannel.link
[Match]
OriginalName=flannel*

[Link]
MACAddressPolicy=none
EOF

flannel-io/flannel#1155
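
(For context, a sketch of how that .link workaround is usually applied when /etc/systemd/network does not yet exist; the directory can simply be created. Note that this only prevents systemd/udev from rewriting the MAC, not flannel from generating a new one, so it may not help with this particular bug; untested on RHEL 8.8 with Canal.)

mkdir -p /etc/systemd/network
cat <<'EOF' >/etc/systemd/network/10-flannel.link
[Match]
OriginalName=flannel*

[Link]
MACAddressPolicy=none
EOF
# The policy takes effect the next time the interface is created, e.g. after a reboot.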

@manuelbuil (Contributor)

Yes, you could use the new flannel image once it is ready. We are waiting on one extra PR to be merged in flannel, and then we will release v0.25.2 with the fix.

@kyrofa (Author) commented Apr 29, 2024

Do you expect flannel to actually be able to handle changing MAC addresses? If so, that functionality appears to have broken somehow. Did k3s change the config to make the interfaces non-learning, perhaps?

This sounds almost certainly related to #9807.

@kyrofa (Author) commented Apr 29, 2024

Nope, I'm wrong. I just updated to v1.29.4+k3s1 and this issue persists. Huh, very confusing. Oh well, I'll just hold out for @manuelbuil's fix to be included in a release.

@manuelbuil (Contributor)

v1.29.4+k3s1 does not include the fix. The fix was released while we were under code freeze, so it will be included in the following release.

brandond added this to the v1.30.1+k3s1 milestone Apr 30, 2024
@kyrofa (Author) commented Apr 30, 2024

Right, I meant whether or not #9807 was related; the fix for that was included in v1.29.4+k3s1.

@brandond (Member)

No, that is an issue with kube-router's netpol controller. This is a flannel issue. Different components.

@kyrofa (Author) commented Apr 30, 2024

Good point. I'm grasping at straws, obviously :) . I just don't like not understanding what broke here, heh.

@sergitron

@brandond or @manuelbuil
Will this fix be backported to 1.28.x and/or 1.29.x environments? Any estimate on the typical timeline for those builds reaching downstream projects like hardened flannel for RKE2?

@manuelbuil (Contributor)

The idea is to include it in 1.29, 1.28, and 1.27. Same for RKE2.

@manuelbuil (Contributor)

@fmoral2 (Contributor) commented May 29, 2024

Validated on Version:

$ k3s version v1.30.1+k3s-f2e7c01a (f2e7c01a)



Environment Details

Infrastructure:
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
Ubuntu, AMD, dual stack

Cluster Configuration:
3 nodes

Steps to validate the fix

  1. Start k3s with flannel-backend: vxlan
  2. Reboot node A
  3. Validate that the flannel-v6.1 MAC address has not changed
  4. Validate that the other nodes can communicate with node A
  5. Validate nodes and pods

Reproduction of the issue:

k3s version v1.30.1+k3s-aadec855 (aadec855)

$ ip link show flannel-v6.1
flannel-v6.1 MAC address: [redacted]-1

$ reboot
$ ip link show flannel-v6.1
flannel-v6.1 MAC address: [redacted]-23

On nodes B and C:
$ ping6 0000:cafe:00::
no connection

Validation Results:

$ ip link show flannel-v6.1
flannel-v6.1 MAC address: [redacted]-1f

$ reboot
$ ip link show flannel-v6.1
flannel-v6.1 MAC address: [redacted]-1f

On nodes B and C:
$ ping6 0000:cafe:00::

--- 0000:cafe:00:: ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2009ms
rtt min/avg/max/mdev = 0.248/0.526/0.992/0.331 ms

 $ kubectl get nodes,pods -A
NAME                                                STATUS   ROLES                       AGE   VERSION
node/ip- .us-west-1.compute.internal     Ready    <none>                      22m   v1.30.1+k3s-f2e7c01a
node/ip- .us-west-1.compute.internal    Ready    control-plane,etcd,master   25m   v1.30.1+k3s-f2e7c01a
node/ip- .us-west-1.compute.internal   Ready    control-plane,etcd,master   28m   v1.30.1+k3s-f2e7c01a
node/ip- .us-west-1.compute.internal   Ready    control-plane,etcd,master   25m   v1.30.1+k3s-f2e7c01a

NAMESPACE     NAME                                                 READY   STATUS      RESTARTS      AGE
default       pod/dualstack-ing-ds-6wwg7                           1/1     Running     1 (12m ago)   22m
default       pod/dualstack-ing-ds-cw2np                           1/1     Running     0             22m
default       pod/dualstack-ing-ds-mgc58                           1/1     Running     0             22m
default       pod/dualstack-ing-ds-qztqq                           1/1     Running     0             22m
default       pod/dualstack-nodeport-deployment-854668b77d-cz5nn   1/1     Running     0             21m
default       pod/dualstack-nodeport-deployment-854668b77d-kn8qk   1/1     Running     0             21m
default       pod/dualstack-nodeport-deployment-854668b77d-mlbx6   1/1     Running     0             21m
default       pod/dualstack-nodeport-deployment-854668b77d-pkjws   1/1     Running     1 (12m ago)   21m
default       pod/httpd-deployment-5847646c76-bs2fj                1/1     Running     0             19m
default       pod/httpd-deployment-5847646c76-qrwz9                1/1     Running     0             19m
kube-system   pod/coredns-576bfc4dc7-2jlcz                         1/1     Running     1 (12m ago)   27m
kube-system   pod/helm-install-traefik-8f6m2                       0/1     Completed   1             27m
kube-system   pod/helm-install-traefik-crd-xjskm                   0/1     Completed   0             27m
kube-system   pod/local-path-provisioner-75bb9ff978-h6d5k          1/1     Running     1 (12m ago)   27m
kube-system   pod/metrics-server-557ff575fb-lztjd                  1/1     Running     1 (12m ago)   27m
kube-system   pod/svclb-traefik-6cf38469-4dmbf                     2/2     Running     0             22m
kube-system   pod/svclb-traefik-6cf38469-6w4p9                     2/2     Running     0             25m
kube-system   pod/svclb-traefik-6cf38469-77s7b                     2/2     Running     2 (12m ago)   26m
kube-system   pod/svclb-traefik-6cf38469-b74g8                     2/2     Running     0             25m
kube-system   pod/traefik-5fb479b77-q5f46                          1/1     Running     1 (12m ago)   26m

fmoral2 closed this as completed May 29, 2024