RBAC - Phase 1 #18178

Closed
14 of 15 tasks
elasticmachine opened this issue Mar 21, 2018 · 10 comments
Labels
Meta Team:Security Team focused on: Auth, Users, Roles, Spaces, Audit Logging, and more!

Comments


elasticmachine commented Mar 21, 2018

Original comment by @kobelb:

Phase 1 - Remove access to .kibana from end-users

Prior to these changes, end-users have had direct access to the .kibana index, which prevents us from applying the granular access control of OLS and RBAC. The first step in preparing for OLS and RBAC is to stop allowing end-users direct access to the .kibana index and instead force all requests to go through the Kibana server, which will enforce its own access control.

These changes will have negligible impact on most end-users. However, if they are using DLS/FLS to provide read-only access to Kibana, this will break their implementation, and objects that were private will become visible to all authorized users of Kibana. The following built-in roles will no longer have privileges to the .kibana index, but will instead have the following Kibana custom privileges:

  • kibana_user: all
  • kibana_dashboard_only_user: read

The role management page in Kibana will be modified to allow users to assign the Kibana custom privileges to roles, and any custom Kibana end-user roles will need to be modified to match the built-in roles. All Kibana server code that reads/writes to the .kibana index will need to be modified to use the internal Kibana user and enforce access control based on the custom privileges.
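As an illustration of the mapping above, here is a sketch (just building the JSON bodies in Python; the application name `kibana-.kibana` and the `*` resource are assumptions for illustration, not confirmed values) of what the built-in roles might look like once they carry Kibana application privileges instead of `.kibana` index privileges:

```python
# Hypothetical sketch, not the exact payloads Kibana/Elasticsearch use.

def role_with_kibana_privilege(privilege):
    """Build an Elasticsearch role body granting one Kibana application privilege."""
    return {
        "cluster": [],
        "indices": [],  # note: no direct .kibana index access anymore
        "applications": [
            {
                "application": "kibana-.kibana",  # assumed application name
                "privileges": [privilege],
                "resources": ["*"],               # stand-in for "default resources"
            }
        ],
    }

kibana_user = role_with_kibana_privilege("all")
kibana_dashboard_only_user = role_with_kibana_privilege("read")

print(kibana_user["applications"][0]["privileges"])                 # ['all']
print(kibana_dashboard_only_user["applications"][0]["privileges"])  # ['read']
```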

[Screenshot: proposed role management screen with Kibana privileges, 2018-03-08]

If we wish for this and/or subsequent phases to be shipped in a minor release, we’ll have to create separate kibana_user and kibana_dashboard_only_user roles and the user would have to opt-in to this functionality via a kibana.yml setting.

Legacy Fallback

#19824 introduces a "legacy fallback" feature which allows RBAC Phase 1 to ship in a minor release without introducing a breaking change, and without requiring users to opt-in via a kibana.yml setting.
Authorization Flow

  1. User makes request to perform an action (e.g., create a Dashboard)
  2. RBAC checks to see if the user has the appropriate Kibana Privilege.
    a. If yes, access is granted, and the request is allowed to continue.
    b. If not, Kibana will then check to see if the user has direct access to the Kibana index.
      i. If the user has direct access to the Kibana index, then the index privileges (e.g., index, read) are used to determine if the request is allowed to continue, and a deprecation warning is logged.
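The flow above can be sketched as follows; `has_kibana_privilege` and `has_index_access` are hypothetical predicates standing in for the real privilege checks, not Kibana's actual implementation:

```python
def authorize(user, action, has_kibana_privilege, has_index_access, log):
    """Decide whether a request may continue, per the legacy-fallback flow."""
    # 2a. The user has the appropriate Kibana privilege: access granted.
    if has_kibana_privilege(user, action):
        return True
    # 2b. Otherwise fall back to direct access to the Kibana index (legacy);
    #     index privileges decide, and a deprecation warning is logged.
    if has_index_access(user, action):
        log("deprecation: %s relied on direct .kibana index access" % user)
        return True
    return False

# Example: no Kibana privilege, but legacy index access -> allowed with a warning.
allowed = authorize(
    "alice", "create-dashboard",
    has_kibana_privilege=lambda u, a: False,
    has_index_access=lambda u, a: True,
    log=print,
)
print(allowed)  # True
```

Note that, per the refinement discussed later in this thread, the fallback only applies to users who have been granted nothing at all in the new system.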

Example auth flows

[Diagram: example auth flows under the legacy fallback]

Tasks

  • Add extension point for encapsulating the callWithRequest/callWithInternalUser call in SavedObjectsClient
  • Add extension point for authorizing the current user to perform the SavedObjectClient call that defers to the privileges/actions in Elasticsearch's security
  • Use the custom privileges/actions methods in Elasticsearch to authorize the end-user
  • Don't create the roles if they already exist
  • Create custom privileges/actions in Elasticsearch (using their APIs) on startup
  • kibana_dashboard_only_user and kibana_user privileges on the index.
  • Allow the user to specify the rbac application name
  • Require applicationName if the index name isn't the default.
  • default resources
  • List roles for other applications, but don't let them edit the application privileges
  • Management of application privileges

Questions

  • If we always PUT the roles on startup, this could get annoying if the user didn't want to use these... should we allow this to be disabled?
  • What happens when we POST a role and the native realm is disabled?
  • Is there any chance that we get a 404 for the roles, and then create them, overwriting the existing ones because of a race condition for restoring the security indices?
  • Superusers: will they always come back with _has_privileges returning true from the ES APIs, or do we need to check that somewhere in Kibana?

Original comment by @kobelb:

LINK REDACTED is the meta issue for all currently planned phases


Original comment by @kobelb:

Larry and I were talking on Zoom about what we want to do about roles that have Kibana application privileges, documented in this LINK REDACTED, being shown and edited via the Roles Management screens.

Currently, multi-tenant Kibana uses different .kibana indices to allow segmentation, and users are granted access to different "tenants" by granting them privileges on the different indices. This phase will change that approach: we'll instead grant access using the custom privileges, scoped per "tenant".

We used to enumerate the potential .kibana indices using Elasticsearch's list of indices; however, with the new approach we will have no way to enumerate the Kibana tenants. To minimize the effort required, we'll be limiting the roles that are listed and editable via Kibana to the current tenant. Otherwise, we'd need to introduce some centralized list of tenants to allow all Kibana roles for the different tenants to be managed from a single interface.


Original comment by @epixa:

So to clarify, the issue here is that we plan to introduce a new kibana-privileges management tool in the roles UI that should only work for kibana indices, and @kobelb is trying to figure out how we could reliably show that section for each .kibana index across the entire cluster within every Kibana install.

I think this overcomplicates things.

Any given Kibana install should only treat its own indices as special in any way. Any index that isn't explicitly managed by the current install should be treated exactly the same as any other index in Elasticsearch. If someone wants to do this deployment, then they need to go to each Kibana separately to manage those privileges.

Remember, this isn't a deployment scenario that we want people to do. When we release Spaces, they will become the recommended way to handle "tenant" scenarios like this. If for some reason a person wants to still maintain different Kibana installs, they can, but the features in the product shouldn't be optimized for that deployment scenario.


Original comment by @kobelb:

If someone wants to do this deployment, then they need to go to each Kibana separately to manage those privileges.

👍 sounds like we're in agreement then, thanks!

@elasticmachine elasticmachine added Team:Security Team focused on: Auth, Users, Roles, Spaces, Audit Logging, and more! Meta labels Apr 24, 2018

legrego commented May 18, 2018

@kobelb some answers & thoughts to the open questions above:

What happens when we POST a role and the native realm is disabled.

I tested a bit this morning, and disabling the native realm does not have any impact on role creation. Kibana will still create the roles on startup, and the roles appear in the role management screen as if the native realm is still enabled. If Kibana is able to determine that the native realm is disabled, then it'd probably be a good idea to let users know this in the UI, since their changes really won't have an impact.

Is there any chance that we get a 404 for the roles, and then create them, overwriting the existing ones because of a race condition for restoring the security indices?

Can we tell if ES is in the middle of a restoration? If so, we could have the Security plugin wait for the restoration to finish before creating roles/going green.

Superusers: will they always come back with _has_privileges returning true from the ES APIs, or do we need to check that somewhere in Kibana?

According to Tim's comment here, it appears Superusers have all privileges on all applications, so there shouldn't be a need to have a separate check within Kibana.
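A sketch of how such a check could be interpreted (the request/response shapes follow Elasticsearch's has-privileges API as I understand it for 6.x; the application name is an assumption, and no HTTP call is made here, only building a request body and reading a simulated response):

```python
def build_has_privileges_body(application, privileges, resources=("*",)):
    """Body for a has-privileges check on Kibana application privileges."""
    return {
        "applications": [
            {
                "application": application,
                "privileges": list(privileges),
                "resources": list(resources),
            }
        ]
    }

def is_authorized(response, application, resource, privilege):
    """Interpret a has-privileges response; superusers come back true for everything."""
    return response["application"][application][resource][privilege]

# A superuser's (simulated) response grants every requested privilege:
superuser_response = {
    "has_all_requested": True,
    "application": {"kibana-.kibana": {"*": {"all": True, "read": True}}},
}
print(is_authorized(superuser_response, "kibana-.kibana", "*", "all"))  # True
```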


kobelb commented May 18, 2018

If Kibana is able to determine that the native realm is disabled, then it'd probably be a good idea to let users know this in the UI, since their changes really won't have an impact.

That's a good point, and we do have a mitigation in place for this as well. I added the xpack.security.rbac.createDefaultRoles option that users can use to disable this functionality if they don't like it. Being able to preemptively detect it and not do it in these situations could be beneficial.
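For reference, the opt-out would look something like this in kibana.yml (setting name taken from the comment above; the default value of true is an assumption):

```yaml
# Prevent Kibana from creating/updating the default roles on startup.
# Assumed to default to true when omitted.
xpack.security.rbac.createDefaultRoles: false
```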

Can we tell if ES is in the middle of a restoration? If so, we could have the Security plugin wait for the restoration to finish before creating roles/going green.

That's a great question, that I don't have an answer to... we'll likely have to defer to the Elasticsearch team on this one.

According to Tim's comment here, it appears Superusers have all privileges on all applications, so there shouldn't be a need to have a separate check within Kibana.

Agreed, I should've checked this question off after testing it with the superuser changes.


jinmu03 commented Jun 22, 2018

Summary of the current approach of Phase 1

Between 6.4 and 7.0, every time Kibana starts up, Kibana (under the kibana_system role) will check the existing application privileges that are registered with the cluster. Currently, these privileges are either read or all. If the privileges do not exist, or if they don’t match what Kibana expects, then Kibana will PUT the correct set of privileges by calling the Application Privileges API.
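The startup reconciliation described above might be sketched like this, with hypothetical get/put callables standing in for the Application Privileges API calls (the privilege bodies and action strings are illustrative assumptions, not Kibana's actual definitions):

```python
# Expected application privileges; the "actions" values are made-up examples.
EXPECTED = {
    "all": {"application": "kibana-.kibana", "name": "all", "actions": ["*"]},
    "read": {"application": "kibana-.kibana", "name": "read",
             "actions": ["action:saved_objects/*/get"]},
}

def sync_privileges(get_registered, put_privileges, expected=EXPECTED):
    """PUT the expected privileges only when the registered ones don't match."""
    registered = get_registered()
    if registered != expected:
        put_privileges(expected)  # overwrite whatever the old version registered
        return "updated"
    return "unchanged"

# Example: nothing registered yet -> Kibana PUTs the expected privileges.
print(sync_privileges(lambda: {}, lambda p: None))  # updated
```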

The built-in kibana_user and kibana_dashboard_only_user roles will have Kibana's custom privileges assigned to them automatically. User-customized roles will go through the legacy fallback until they are updated by the end-user.

When will the legacy fallback be invoked?

The legacy fallback is only invoked when a user has no permissions through the new system. As soon as a user is granted anything in the new system, then the legacy fallback is no longer an option for them.

How are the custom privileges assigned to built-in roles and custom roles?

Kibana defines the custom privileges in Elasticsearch, and Kibana can assign them to user-created roles via the UI, but Elasticsearch has to assign them to the built-in roles, because of how built-in roles are defined within Elasticsearch.

Under what circumstances won't the privileges match what Kibana expects?

That will happen if, for example, users are upgrading from 6.4 to 6.5 (or any version in the future). The new version of Kibana may have a different set of privileges than the old version, so in that case, the new version will overwrite the existing privileges that the old version used.


epixa commented Jun 24, 2018

If user has direct access to the Kibana index, then the index privileges (e.g., index, read) are used to determine if the request is allowed to continue, and a deprecation warning is logged.

Rather than do this for every request, what do you think about doing the legacy check at initial authentication time? The idea is that the entire user's session is tagged for the new model when it is created, and then at request time we only check the auth model that is appropriate for that session.

The upside to this approach is that any given user cannot have their session influenced by both new and legacy privileges. For example, under the original legacy fallback proposal if I understand it correctly, if a user has read/write through the old system and only read through the new system, when creating a dashboard we would first check the new system, see no write permission, then check the old and see write permission, so we allow the write. In reality we want any usage of the new system to completely invalidate any of the old rules, right?


legrego commented Jun 25, 2018

The upside to this approach is that any given user cannot have their session influenced by both new and legacy privileges. For example, under the original legacy fallback proposal if I understand it correctly, if a user has read/write through the old system and only read through the new system, when creating a dashboard we would first check the new system, see no write permission, then check the old and see write permission, so we allow the write. In reality we want any usage of the new system to completely invalidate any of the old rules, right?

The legacy fallback is only invoked when a user has no permissions through the new system. As soon as a user is granted anything in the new system, then the legacy fallback is no longer an option for them. So in your example above, if a user has read/write through the old system and only read through the new system, then they would not be allowed to create a dashboard.

We can certainly investigate moving the check to login time, but our gut feeling is that it'd be a non-trivial amount of effort. If the only motivation is to prevent a split authZ model, then it might not be necessary to pursue, given the way the check is structured today.


epixa commented Jun 25, 2018

Awesome. This alleviates my concern.
