nmcli error with "invalid or not allowed setting 'ipv4'" when running the playbook the 2nd time #8558

Open · 1 task done
rabin-io opened this issue Jun 25, 2024 · 5 comments · May be fixed by #8729

Labels: bug, has_pr, module, plugins

rabin-io commented Jun 25, 2024

Summary

I can create a bond interface with this example:

    - name: Setup bond interface for - internal
      tags: [network]
      community.general.nmcli:
        conn_name: "{{ bond_name_internal }}"
        ifname: "{{ bond_name_internal }}"
        zone: internal
        type: bond
        mode: active-backup
        state: present
        mtu: 9000

It runs OK the first time, and I can add the interfaces to it as well, but the second time I run the playbook I get this error message:

fatal: [localhost]: FAILED! => changed=false 
  msg: |-
    Error: invalid or not allowed setting 'ipv4': 'ipv4' not among [connection, bond, 802-3-ethernet (ethernet), ethtool, bridge-port, link, match].
  name: bond-internal
  rc: 2
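
In case it's useful, the profile can be inspected after the failure like this (a sketch; it assumes nmcli is available on the failing host):

# Show the profile's current type and slave-type as NetworkManager sees them.
nmcli -f connection.type,connection.slave-type con show bond-internal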

Issue Type

Bug Report

Component Name

nmcli

Ansible Version

$ ansible --version


ansible [core 2.17.0]
  config file = /home/rabin/src/ansible-playbooks/ocp-agent-provision/ansible.cfg
  configured module search path = ['/home/rabin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/rabin/.local/lib/python3.12/site-packages/ansible
  ansible collection location = /home/rabin/src/ansible-playbooks/ocp-agent-provision/.ansible:/home/rabin/.ansible/collections
  executable location = /home/rabin/.local/bin/ansible
  python version = 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] (/usr/bin/python)
  jinja version = 3.1.3
  libyaml = True


Configuration

# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
OS / Environment

Fedora 40

Steps to Reproduce

- name: Debug
  # hosts: localhost
  hosts: service_host
  gather_facts: false
  vars:
    ansible_ssh_user: root
    bond_name_internal: bond-internal
    private_network_interface_name: internal
    private_network: "10.148.118.192"
    private_network_cidr: "26"

  tasks:

    - name: Setup bond interface for - internal
      tags: [network]
      community.general.nmcli:
        conn_name: "{{ bond_name_internal }}"
        zone: internal
        type: bond
        mode: active-backup
        state: present
        mtu: 9000

    - name: Add bond-slaves to {{ bond_name_internal }}
      tags: [network]
      community.general.nmcli:
        type: bond-slave
        slave_type: bond
        conn_name: "{{ item }}"
        ifname: "{{ item }}"
        master: "{{ bond_name_internal }}"
        state: present
        mtu: 9000
      loop:
        - int0
        - int2

    - name: Setup bridge - internal
      tags: [network]
      community.general.nmcli:
        conn_name: "{{ private_network_interface_name }}"
        ifname: "{{ private_network_interface_name }}"
        zone: internal
        type: bridge
        stp: false
        ip4: "{{ eth0_ip }}/{{ private_network_cidr }}"
        routes4:
          - 10.0.0.0/8 {{ private_network_gw }}
        state: present
        mtu: 9000
      vars:
        private_network_gw: "{{ (private_network ~ '/' ~ private_network_cidr) | ansible.utils.nthhost(1) }}"

    - name: Add bond to internal bridge
      tags: [network]
      community.general.nmcli:
        type: bridge-slave
        slave_type: bridge
        conn_name: "{{ bond_name_internal }}"
        ifname: "{{ bond_name_internal }}"
        master: "{{ private_network_interface_name }}"
        state: present
        mtu: 9000
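
Running the play twice back to back reproduces the failure. For example, assuming the play above is saved as reproduce.yml (the file name and inventory here are illustrative, not from the report):

# First run: all four tasks succeed and the connections come up.
ansible-playbook -i inventory reproduce.yml
# Second run: the "Setup bond interface" task fails with the ipv4 error.
ansible-playbook -i inventory reproduce.yml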

Expected Results

The second run reports no changes (the tasks are idempotent).

Actual Results

The task fails with:

fatal: [localhost]: FAILED! => changed=false 
  msg: |-
    Error: invalid or not allowed setting 'ipv4': 'ipv4' not among [connection, bond, 802-3-ethernet (ethernet), ethtool, bridge-port, link, match].
  name: bond-internal
  rc: 2

Code of Conduct

  • I agree to follow the Ansible Code of Conduct
ansibullbot (Collaborator) commented:

Files identified in the description:

If these files are incorrect, please update the component name section of the description or use the !component bot command.

ansibullbot added the bug, module, and plugins labels on Jun 25, 2024
joey-grant added a commit to joey-grant/community.general that referenced this issue Aug 7, 2024

Because `bond` connections are included in the `ip_conn_type` list, the resulting `nmcli` commands include references to `ipv4` and `ipv6` settings. These options are not available for `bond` connection types, as stated by the error output in issue ansible-collections#8558.

Closes ansible-collections#8558
joey-grant linked pull request #8729 on Aug 7, 2024 that will close this issue
joey-grant commented:
Hey @rabin-io, I've tracked this issue down to the nmcli command produced by the plugin. Specifically, the first play in your example results in the following command:

nmcli con modify bond-internal connection.autoconnect yes ipv4.ignore-auto-dns no ipv4.ignore-auto-routes no ipv4.never-default no ipv4.may-fail yes ipv6.ignore-auto-dns no ipv6.ignore-auto-routes no 802-3-ethernet.mtu 0 mode balance-rr

The issue here is that nmcli does not accept ipv4 and ipv6 parameters for connections of bond type. The fix seems fairly simple, though. See my pull request: #8729

rabin-io (Author) commented Aug 7, 2024

Hey @joey-grant, thank you for looking into it and creating the PR.
Does it mean that a bond can't have an IP?

joey-grant commented Aug 7, 2024

@rabin-io, actually the issue appears to be a bit different, and my solution is 100% under-baked. Specifically, if we look at the nmcli commands produced under the hood, we see the following being run:

/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli con add type bond con-name bond-internal connection.interface-name bond-internal connection.autoconnect yes connection.zone internal ipv4.ignore-auto-dns no ipv4.ignore-auto-routes no ipv4.never-default no ipv4.may-fail yes ipv6.ignore-auto-dns no ipv6.ignore-auto-routes no 802-3-ethernet.mtu 9000 mode active-backup
/usr/bin/nmcli con up bond-internal
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli con add type bond-slave con-name eth1 connection.interface-name eth1 connection.autoconnect yes 802-3-ethernet.mtu 9000 connection.master bond-internal connection.slave-type bond
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli con add type bond-slave con-name eth2 connection.interface-name eth2 connection.autoconnect yes 802-3-ethernet.mtu 9000 connection.master bond-internal connection.slave-type bond
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli con add type bridge con-name internal connection.interface-name internal connection.autoconnect yes connection.zone internal ipv4.addresses 192.168.121.202/26 ipv4.ignore-auto-dns no ipv4.ignore-auto-routes no ipv4.routes 10.0.0.0/8 10.148.118.193 ipv4.never-default no ipv4.method manual ipv4.may-fail yes ipv6.ignore-auto-dns no ipv6.ignore-auto-routes no bridge.ageing-time 300 bridge.forward-delay 15 bridge.hello-time 2 bridge.max-age 20 bridge.priority 128 bridge.stp no
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --show-secrets con show bond-internal

# after this command, the ipv4 and ipv6 options are no longer available
/usr/bin/nmcli con modify bond-internal connection.interface-name bond-internal connection.autoconnect yes connection.master internal connection.slave-type bridge bridge-port.path-cost 100 bridge-port.hairpin-mode no bridge-port.priority 32
/usr/bin/nmcli --fields name --terse con show


# start second ansible run, which fails
/usr/bin/nmcli --fields name --terse con show
/usr/bin/nmcli --show-secrets con show bond-internal
/usr/bin/nmcli con modify bond-internal connection.autoconnect yes connection.zone internal ipv4.ignore-auto-dns no ipv4.ignore-auto-routes no ipv4.never-default no ipv4.may-fail yes ipv6.ignore-auto-dns no ipv6.ignore-auto-routes no 802-3-ethernet.mtu 9000 mode active-backup
/usr/bin/nmcli --fields name --terse con show

I'm sure someone who has a stronger understanding of your specific use-case may be able to shed light on this issue.

[EDIT] Reran test and updated commands; left all calls in for additional context. Also, note that the IPs and interface names above differ due to differences in my test environment.
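
Distilled from the log above, a minimal reproduction outside of Ansible might look like this (a sketch; connection names are taken from the example, and device state is ignored):

# Create the bond and bridge profiles; at this point ipv4.* settings
# are still valid on the bond.
nmcli con add type bond con-name bond-internal ifname bond-internal mode active-backup
nmcli con add type bridge con-name internal ifname internal

# Enslave the bond to the bridge. This re-types the profile, so only the
# [connection, bond, 802-3-ethernet, ethtool, bridge-port, link, match]
# setting groups remain valid on it.
nmcli con modify bond-internal connection.master internal connection.slave-type bridge

# Any later modify touching ipv4.*/ipv6.* now fails -- which is what the
# module attempts on the second playbook run:
nmcli con modify bond-internal ipv4.may-fail yes
# Error: invalid or not allowed setting 'ipv4': ...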
