
Unable to see non-LXD managed networks for OIDC identity with server admin entitlement #14085

Closed
mas-who opened this issue Sep 12, 2024 · 6 comments · Fixed by #14447

mas-who commented Sep 12, 2024

Issue description

When making an API request to `GET /1.0/networks` as a TLS-authenticated identity, LXD reports all networks in the response, as shown in the example below:
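
For reference, here is a minimal sketch of how such a request can be issued; the unix socket path assumes a snap install:

```
# Raw API query via the LXD CLI (prints only the response metadata):
lxc query /1.0/networks

# Or via the local unix socket, which returns the full response
# envelope shown below (socket path assumes the snap install):
curl -s --unix-socket /var/snap/lxd/common/lxd/unix.socket lxd/1.0/networks
```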

```
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "operation": "",
  "error_code": 0,
  "error": "",
  "metadata": [
    {
      "name": "lxdbr0",
      "description": "",
      "type": "bridge",
      "managed": true,
      "status": "Created",
      "config": {
        "ipv4.address": "10.173.68.1/24",
        "ipv4.nat": "true",
        "ipv6.address": "fd42:fd46:adbb:ef2f::1/64",
        "ipv6.nat": "true"
      },
      "used_by": [
        "/1.0/profiles/default",
        "/1.0/instances/node-1",
        "/1.0/instances/node-2",
        "/1.0/instances/c1",
        "/1.0/instances/c2",
        "/1.0/instances/c3"
      ],
      "locations": [
        "none"
      ]
    },
    {
      "name": "lo",
      "description": "",
      "type": "loopback",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "eno1",
      "description": "",
      "type": "physical",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "wlo1",
      "description": "",
      "type": "physical",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "ovs-system",
      "description": "",
      "type": "unknown",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "genev_sys_6081",
      "description": "",
      "type": "unknown",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "br-int",
      "description": "",
      "type": "bridge",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "docker0",
      "description": "",
      "type": "bridge",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "tapb04dfee5",
      "description": "",
      "type": "unknown",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "tap0b6e4394",
      "description": "",
      "type": "unknown",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    },
    {
      "name": "tap1dc397ab",
      "description": "",
      "type": "unknown",
      "managed": false,
      "status": "",
      "config": {},
      "used_by": [],
      "locations": null
    }
  ]
}
```

However, when making the same request as an OIDC-authenticated identity with the server `admin` entitlement assigned, the API response includes only the LXD-managed networks, as shown below:

```
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "operation": "",
  "error_code": 0,
  "error": "",
  "metadata": [
    {
      "name": "lxdbr0",
      "description": "",
      "type": "bridge",
      "managed": true,
      "status": "Created",
      "config": {
        "ipv4.address": "10.173.68.1/24",
        "ipv4.nat": "true",
        "ipv6.address": "fd42:fd46:adbb:ef2f::1/64",
        "ipv6.nat": "true"
      },
      "used_by": [
        "/1.0/profiles/default",
        "/1.0/instances/c1",
        "/1.0/instances/c2",
        "/1.0/instances/c3",
        "/1.0/instances/node-1",
        "/1.0/instances/node-2"
      ],
      "locations": [
        "none"
      ]
    }
  ]
}
```

Since the OIDC identity has the server `admin` entitlement, I would expect to see all networks in the API response. This looks like a bug in the permissions check, specifically related to networks.

The above behaviour was seen with both the latest/edge and 5.21/edge LXD snap channels.
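
For anyone reproducing this, a minimal sketch of the OIDC setup follows; the remote name, group name, and identity email are all placeholders, and the remote is assumed to already be configured for OIDC:

```
# Placeholders: "my-lxd" is an OIDC-configured remote, "admins" is a new
# group, and "user@example.com" is the OIDC identity being tested.
lxc auth group create admins
lxc auth group permission add admins server admin
lxc auth identity group add oidc/user@example.com admins

# Query networks as the OIDC identity; only lxdbr0 is returned:
lxc query my-lxd:/1.0/networks
```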

tomponline (Member) commented:

@markylaing is this a bug?

tomponline (Member) commented:

@mas-who which LXD version please

mas-who (Author) commented Sep 12, 2024

> @mas-who which LXD version please

@tomponline I tested this with LXD snap for both latest/edge and 5.21/edge channels. Also added to the issue description.

tomponline (Member) commented:

>> @mas-who which LXD version please
>
> @tomponline I tested this with LXD snap for both latest/edge and 5.21/edge channels. Also added to the issue description.

Please can you provide the output of `snap list` so we can see the specific version (and please make that a default habit when opening issues too).

mas-who (Author) commented Sep 12, 2024

>>> @mas-who which LXD version please
>>
>> @tomponline I tested this with LXD snap for both latest/edge and 5.21/edge channels. Also added to the issue description.
>
> Please can you provide the output of `snap list` so we can see the specific version (and please make that a default habit when opening issues too).

Sure, I'll make sure to include this in the future. Here's the `snap list` output:

```
Name                       Version                 Rev    Tracking         Publisher         Notes
bare                       1.0                     5      latest/stable    canonical✓        base
code                       4849ca9b                168    latest/stable    vscode✓           classic
core                       16-2.61.4-20240607      17200  latest/stable    canonical✓        core
core18                     20240612                2829   latest/stable    canonical✓        base
core20                     20240416                2318   latest/stable    canonical✓        base
core22                     20240823                1612   latest/stable    canonical✓        base
core24                     20240710                490    latest/stable    canonical✓        base
firefox                    130.0-2                 4848   latest/stable/…  mozilla✓          -
gnome-3-34-1804            0+git.3556cb3           93     latest/stable    canonical✓        -
gnome-3-38-2004            0+git.efb213a           143    latest/stable/…  canonical✓        -
gnome-42-2204              0+git.510a601           176    latest/stable/…  canonical✓        -
gtk-common-themes          0.1-81-g442e511         1535   latest/stable/…  canonical✓        -
konf                       0+git.129ff3c           50     latest/stable    canonicalwebteam  -
kubectl                    1.14.10                 1377   1.14/stable      canonical✓        classic
lxd                        git-c6d9159             30228  latest/edge      canonical✓        -
microceph                  0+git.7b5672b           707    quincy/stable    canonical✓        held
microcloud                 1.1-04a1c49             734    latest/stable    canonical✓        -
microovn                   22.03.3+snap0e23a0e4f5  395    22.03/stable     canonical✓        -
parca-agent                0.30.0                  1763   latest/stable    parca-team✓       classic
rockcraft                  1.5.3                   2218   latest/stable    canonical✓        classic
slack                      4.39.95                 158    latest/stable    slack✓            -
snap-store                 41.3-77-g7dc86c8        1113   latest/stable/…  canonical✓        -
snapd                      2.63                    21759  latest/stable    canonical✓        snapd
snapd-desktop-integration  0.9                     178    latest/stable/…  canonical✓        -
yq                         v4.44.2                 2566   latest/stable    mikefarah         -
```

markylaing (Contributor) commented:

Thanks for reporting @mas-who. @tomponline yes, this is a bug; we've documented that `admin` on `server` should grant the equivalent of unix socket access.

The reason for this is that the OpenFGA driver relies on the database, but these networks are not in the database! It's an edge case that I should have thought about.

I think we should add a new entitlement on `server`, called `can_view_host_networks`, which will grant the identity permission to view these networks and will be granted automatically if they have `admin`.
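
If that lands, granting it could be a one-liner along these lines; `can_view_host_networks` is only proposed at this point, so both the entitlement and the `viewers` group name are hypothetical:

```
# Hypothetical: grant viewing of host (non-LXD-managed) networks to a
# group, without granting full server admin.
lxc auth group permission add viewers server can_view_host_networks
```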

@tomponline tomponline added this to the lxd-6.2 milestone Sep 13, 2024
@tomponline tomponline added the Bug Confirmed to be a bug label Sep 13, 2024
tomponline added a commit that referenced this issue Nov 12, 2024
While working on #14085 I set up
a new fine-grained TLS identity and issued the following commands as
that identity, without any permissions yet (I forgot I'd changed my
default remote):
```
$ lxc auth group create tmp
Error: Forbidden
$ lxc auth group permission add tmp server admin
Error: Failed to check OpenFGA relation: No such entity "/1.0/auth/groups/tmp"
```
Creating the group failed; this is correct behaviour.

When attempting to add a permission to the non-existent group, the
request failed (correct) but the OpenFGA Authorization driver returned
the above error. This is incorrect.

This PR checks if the error returned by a `Check` request on the
embedded OpenFGA server is a `Not Found` error and returns a generic not
found error. This makes errors returned by the authorizer consistent. We
are masking all not found errors returned before access control
decisions are made to prevent discovery. After this change, the same
command returns:
```
$ lxc auth group permission add tmp server admin
Error: Not Found
```