client configuration is failing #16
Hi @meekley, is the issue transient, or can it be reproduced each time you deploy the lab? Based on the error shared, it seems the team0 interface is not up or was never created. Can you share the status of the eth and team0 interfaces from the client when you see this issue?

docker exec -it clab-avdirb-client1 sudo ifconfig
docker exec -it clab-avdirb-client1 sudo teamdctl team0 state view
docker exec -it clab-avdirb-client1 sudo teamdctl team0 state dump

Also, please share the containerlab, Docker, and base host OS version details for the machine where you are seeing this issue.
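If it helps, here is a small sketch for collecting all of the above in one pass (assumptions: bash on the host and the container name clab-avdirb-client1 from the commands above; -t is dropped so the output redirects cleanly):

# Gather client interface/teamd state plus version details into one file.
{
  for cmd in "ifconfig" "teamdctl team0 state view" "teamdctl team0 state dump"; do
    echo "### sudo $cmd"
    docker exec clab-avdirb-client1 sudo $cmd
  done
  echo "### versions"
  containerlab version
  docker version
  cat /etc/os-release
} > client1-diag.txt 2>&1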
I am also seeing this issue. I used the included Dockerfile to build the Alpine host with the alpine:latest tag. From what I can tell, the Docker bootstrap config is not even configuring the interface teams, as those interfaces are not recognized when I try the commands to view the team. My Docker version is 26.0.0, build 2ae903e, and my OS is:
Hi @jsholes, in your instance are the eth interfaces not even getting added to team0? Would it be possible for you to share the following from the setup where you are still seeing this issue?

docker exec -it clab-avdirb-client1 sudo ifconfig
docker exec -it clab-avdirb-client1 sudo teamdctl team0 state view
docker exec -it clab-avdirb-client1 sudo teamdctl team0 state dump

As an alternative, you can use cEOS-lab instances as a substitute for the client, but that would require some additional configuration via CLI. I tried to replicate your setup and did not see any issues while creating the client configs:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian

Client: Docker Engine - Community
Version: 26.0.0
API version: 1.45
Go version: go1.21.8
Git commit: 2ae903e
Built: Wed Mar 20 15:17:48 2024
OS/Arch: linux/amd64
Context: default

containerlab version
____ ___ _ _ _____ _ ___ _ _ _____ ____ _ _
/ ___/ _ \| \ | |_ _|/ \ |_ _| \ | | ____| _ \| | __ _| |__
| | | | | | \| | | | / _ \ | || \| | _| | |_) | |/ _` | '_ \
| |__| |_| | |\ | | |/ ___ \ | || |\ | |___| _ <| | (_| | |_) |
\____\___/|_| \_| |_/_/ \_\___|_| \_|_____|_| \_\_|\__,_|_.__/
version: 0.53.0
commit: 7ce0afa2
date: 2024-03-25T16:31:05Z
source: https://github.com/srl-labs/containerlab
rel. notes: https://containerlab.dev/rn/0.53/

/ $ ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:C8:C8:08
inet addr:172.200.200.8 Bcast:172.200.200.255 Mask:255.255.255.0
inet6 addr: 2001:172:200:200::b/80 Scope:Global
inet6 addr: fe80::42:acff:fec8:c808/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1701 errors:0 dropped:0 overruns:0 frame:0
TX packets:489 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:258723 (252.6 KiB) TX bytes:76382 (74.5 KiB)
eth1 Link encap:Ethernet HWaddr AA:C1:AB:85:4F:A4
inet6 addr: fe80::a8c1:abff:fe85:4fa4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:230 errors:0 dropped:0 overruns:0 frame:0
TX packets:518 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:27570 (26.9 KiB) TX bytes:64044 (62.5 KiB)
eth2 Link encap:Ethernet HWaddr AA:C1:AB:85:4F:A4
inet6 addr: fe80::a8c1:abff:feeb:badd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:226 errors:0 dropped:0 overruns:0 frame:0
TX packets:499 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:27022 (26.3 KiB) TX bytes:60988 (59.5 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
team0 Link encap:Ethernet HWaddr AA:C1:AB:85:4F:A4
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
team0.110 Link encap:Ethernet HWaddr AA:C1:AB:85:4F:A4
inet addr:10.1.10.101 Bcast:10.1.10.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ $ sudo teamdctl team0 state view
setup:
  runner: lacp
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 0
      selected: no
      state: defaulted
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 0
      selected: no
      state: defaulted
runner:
  active: yes
  fast rate: yes
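In the state view above, both ports report "selected: no" and "state: defaulted", which means the links are physically up but no LACPDUs have been received from the switch side yet, so the aggregator has not formed. A rough way to look at both ends of the LACP exchange (the leaf container name and EOS commands are assumptions based on the clab-avdirb prefix and the leaf1a prompt later in this thread, so adjust them to the actual topology):

# Client side: full per-port LACP detail as JSON
docker exec clab-avdirb-client1 sudo teamdctl team0 state dump

# Switch side: are the member Ethernet ports bundled into the Port-Channel?
docker exec -it clab-avdirb-leaf1a Cli -c "show port-channel summary"
docker exec -it clab-avdirb-leaf1a Cli -c "show lacp neighbor"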
Hi @UchihaItachiSama, I cannot create the team0 NIC.
Interesting: it's failing on Ubuntu, but works fine on Amazon Linux.
I'm running my lab on Ubuntu 22 on AWS.
@FloLaco, in your case I see the error is "This program is not intended to be run as root." Can you please run the same as root (with sudo) and let us know if you see the same issue?

sudo teamd -d -r -f /home/alpine/teamd-lacp.conf
@UchihaItachiSama It was run with the sudo command.
I ran into the same issue. The interfaces eth1 and eth2 failed to come up:

19: eth2@if20: <BROADCAST,MULTICAST,M-DOWN> mtu 9500 qdisc noqueue state DOWN

The peering cEOS interfaces are also down:

leaf1a(config-if-Et5)#show interfaces eth5

/ $ sudo teamd -k -f /home/alpine/teamd-lacp.conf
/ $ sudo teamd -d -r -f /home/alpine/teamd-lacp.conf
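For anyone retrying the team setup by hand inside the client container, here is a minimal sketch of the kill-and-recreate sequence. The config shown is only an approximation reconstructed from the teamdctl state view output earlier in this thread (lacp runner, active, fast rate, ethtool link watch, ports eth1/eth2); the lab's actual /home/alpine/teamd-lacp.conf may differ, so substitute it if present.

# Write an LACP teamd config (reconstructed, not the lab's original file).
cat <<'EOF' | sudo tee /tmp/teamd-lacp.conf
{
  "device": "team0",
  "runner": { "name": "lacp", "active": true, "fast_rate": true },
  "link_watch": { "name": "ethtool" },
  "ports": { "eth1": {}, "eth2": {} }
}
EOF

sudo teamd -k -f /tmp/teamd-lacp.conf    # stop any running instance for this config
sudo teamd -d -r -f /tmp/teamd-lacp.conf # -d daemonize, -f config file, -r as used above
sudo teamdctl team0 state view           # confirm the ports become selected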
Client configuration does not work. It looks like the interface teaming in the Alpine container entrypoint script is failing.
Messages after running the l3_build.sh script:
[INFO] Configuring clab-avdirb-client1
client1
vconfig: ioctl error for add: No such device
ifconfig: SIOCSIFADDR: No such device
ip: ioctl 0x8913 failed: No such device
ip: can't find device 'team0.110'
ifconfig: team0.110: error fetching interface information: Device not found
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.100.100.1 0.0.0.0 UG 0 0 0 eth0
172.100.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
[INFO] Configuring clab-avdirb-client2
client2
vconfig: ioctl error for add: No such device
ifconfig: SIOCSIFADDR: No such device
ip: ioctl 0x8913 failed: No such device
ip: can't find device 'team0.111'
ifconfig: team0.111: error fetching interface information: Device not found
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.100.100.1 0.0.0.0 UG 0 0 0 eth0
172.100.100.0 0.0.0.0 255.255.255.0 U
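All of the "No such device" errors above refer to the team0.110 / team0.111 VLAN sub-interfaces, which suggests the parent team0 device was never created by the entrypoint in the first place. A few hedged checks that may help narrow this down (container names taken from the output above; the kernel-module angle is only a hypothesis and is not confirmed anywhere in this thread):

# Did the entrypoint/teamd log an error while bootstrapping the client?
docker logs clab-avdirb-client1

# Does team0 exist inside the container at all?
docker exec clab-avdirb-client1 ip link show team0

# teamd relies on the host kernel's "team" driver; check that it is available
# and loaded on the host (hypothesis only).
lsmod | grep -w team || sudo modprobe team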