Kubeturbo Cluster Roles
If you are deploying Kubeturbo with the Operator, the Operator itself can run with either cluster role:
- cluster-admin
- a minimum-privilege cluster role, which is the default in the OpenShift OperatorHub deployment. For manually deploying the Operator, the cluster role is here
The Kubeturbo mediation probe can run with 3 different cluster role options that control its ability to execute actions:
- cluster-admin role. Note: this is the default role assigned to the Kubeturbo service account.
- Execute Actions role
- Read-Only role (discovery and metrics only)

This page also covers changing roles after the initial deployment.
Execute Actions Role
You can choose to run with a custom role that provides the minimum privileges needed to execute actions. The yaml to use for this is here and defines a Cluster Role named turbo-cluster-admin.
Steps to use this custom Execute Actions Cluster Role when deploying with yaml manifests:
- Create the new Cluster Role turbo-cluster-admin from the yaml here.
- Update the Cluster Role Binding yaml here to use the new custom role named turbo-cluster-admin under the roleRef section, as shown in the yaml example below:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: turbo-all-binding-kubeturbo-turbo
subjects:
- kind: ServiceAccount
  name: turbo-user
  namespace: turbo
roleRef:
  kind: ClusterRole
  name: turbo-cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
- Create the Cluster Role Binding (default name is turbo-all-binding-kubeturbo-turbo).
- Continue with the rest of the kubeturbo deployment using the custom Cluster Role defined.
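The steps above can be sketched with kubectl. The file names below are hypothetical placeholders for the role and binding yaml files you edited; the service account and binding names match the example binding shown earlier:

```shell
# Create the custom Cluster Role and the Cluster Role Binding
# (hypothetical local file names; use the yaml files you edited).
kubectl apply -f turbo-cluster-admin-role.yaml
kubectl apply -f turbo-cluster-admin-binding.yaml

# Verify the binding now references the custom role.
kubectl get clusterrolebinding turbo-all-binding-kubeturbo-turbo \
  -o jsonpath='{.roleRef.name}'

# Spot-check that the service account is allowed an action-related verb.
kubectl auth can-i patch deployments \
  --as=system:serviceaccount:turbo:turbo-user
```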
Steps to use this custom Execute Actions Cluster Role when deploying with the Operator:
- Update your kubeturbo deployment yaml with the additional parameter roleName: set to the value turbo-cluster-admin, as shown below:
```yaml
apiVersion: charts.helm.k8s.io/v1
kind: Kubeturbo
metadata:
  name: kubeturbo-release
  namespace: turbo
spec:
  serverMeta:
    turboServer: 'https://MY_TURBO_SERVER_URL'
  targetConfig:
    targetName: MY_CLUSTER_NAME
  roleName: turbo-cluster-admin
```
- Continue with the rest of the kubeturbo deployment using the custom Cluster Role defined.
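Applying the updated custom resource can be sketched as follows, assuming the default resource and namespace names from the yaml example (kubeturbo-release in the turbo namespace):

```shell
# Apply the updated Kubeturbo custom resource
# (hypothetical local file name).
kubectl apply -f kubeturbo-release.yaml -n turbo

# Confirm the operator sees the new roleName in the spec.
kubectl get kubeturbo kubeturbo-release -n turbo \
  -o jsonpath='{.spec.roleName}'
```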
Steps to use this custom Execute Actions Cluster Role when deploying with Helm:
- Add the following parameter to your helm install command: --set roleName=turbo-cluster-admin
- Optionally, specify the roleName: parameter with a value of turbo-cluster-admin in the values.yaml file.
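A full install command might look like the sketch below. The release name, chart reference, and server values are assumptions; substitute your own. Only --set roleName is specific to this step:

```shell
# Hypothetical release name and chart reference; only the roleName
# override is required for the custom Execute Actions role.
helm install kubeturbo-release kubeturbo/kubeturbo \
  --namespace turbo \
  --set serverMeta.turboServer='https://MY_TURBO_SERVER_URL' \
  --set targetConfig.targetName=MY_CLUSTER_NAME \
  --set roleName=turbo-cluster-admin
```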
Read-Only Role
You can choose to run with a custom role that provides read-only privileges, allowing Kubeturbo to discover your environment and collect metrics only. The yaml to use for this is here and defines a Cluster Role named turbo-cluster-reader.
Steps to use this custom Read-Only Cluster Role when deploying with yaml manifests:
- Create the new Cluster Role turbo-cluster-reader from the yaml here.
- Update the Cluster Role Binding yaml here to use the new custom role named turbo-cluster-reader under the roleRef section, as shown in the yaml example below:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: turbo-all-binding-kubeturbo-turbo
subjects:
- kind: ServiceAccount
  name: turbo-user
  namespace: turbo
roleRef:
  kind: ClusterRole
  name: turbo-cluster-reader
  apiGroup: rbac.authorization.k8s.io
```
- Create the Cluster Role Binding (default name is turbo-all-binding-kubeturbo-turbo).
- Continue with the rest of the kubeturbo deployment using the custom Cluster Role defined.
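With the reader binding in place, you can sanity-check the granted permissions with kubectl auth can-i. The service account and namespace below match the binding example above; adjust if yours differ. Discovery verbs should be permitted while action verbs should not:

```shell
# Discovery should be allowed for the read-only role.
kubectl auth can-i list pods \
  --as=system:serviceaccount:turbo:turbo-user

# Action verbs should be denied for the read-only role.
kubectl auth can-i patch deployments \
  --as=system:serviceaccount:turbo:turbo-user
```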
Steps to use this custom Read-Only Cluster Role when deploying with the Operator:
- Update your kubeturbo deployment yaml with the additional parameter roleName: set to the value turbo-cluster-reader, as shown below:
```yaml
apiVersion: charts.helm.k8s.io/v1
kind: Kubeturbo
metadata:
  name: kubeturbo-release
  namespace: turbo
spec:
  serverMeta:
    turboServer: 'https://MY_TURBO_SERVER_URL'
  targetConfig:
    targetName: MY_CLUSTER_NAME
  roleName: turbo-cluster-reader
```
- Continue with the rest of the kubeturbo deployment using the custom Cluster Role defined.
Steps to use this custom Read-Only Cluster Role when deploying with Helm:
- Add the following parameter to your helm install command: --set roleName=turbo-cluster-reader
- Optionally, specify the roleName: parameter with a value of turbo-cluster-reader in the values.yaml file.
Changing Roles
If you deployed Kubeturbo with the Operator configured with the turbo-cluster-reader role (or any other role that you want to change after initial deployment) and you now need to switch to the elevated turbo-cluster-admin role, do the following to successfully configure Kubeturbo to use the new role:
- Update the kubeturbo-release yaml with the new role as detailed here.
- Delete the Cluster Role Binding (CRB) whose name starts with turbo-all-binding-kubeturbo. This binding does not automatically get updated/patched with the new role, and you will see errors in the operator log similar to failed upgrade (cannot patch "turbo-all-binding-kubeturbo-release-turbo3" with kind ClusterRoleBinding. A full log error example:

```
{"level":"error","ts":1699544663.525695,"logger":"helm.controller","msg":"Release failed","namespace":"turbo3","name":"kubeturbo-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Kubeturbo","release":"kubeturbo-release","error":"failed upgrade (cannot patch \"turbo-all-binding-kubeturbo-release-turbo3\" with kind ClusterRoleBinding: ClusterRoleBinding.rbac.authorization.k8s.io \"turbo-all-binding-kubeturbo-release-turbo3\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"ClusterRole\", Name:\"turbo-cluster-reader-kubeturbo-release-turbo3\"}: cannot change roleRef) and failed rollback: no ClusterRole with the name \"turbo-cluster-admin-kubeturbo-release-turbo3\" found"
```

- Once the CRB is deleted, the error above will be gone and Kubeturbo will use the elevated role.
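Finding and deleting the stale CRB can be sketched as follows. The binding name below is taken from the log example above; your own binding name will reflect your release and namespace. The delete is needed because, as the log shows, roleRef on an existing binding cannot be changed, so the operator must recreate the binding rather than patch it:

```shell
# Find the CRB created for this kubeturbo release; its name starts
# with turbo-all-binding-kubeturbo.
kubectl get clusterrolebinding | grep turbo-all-binding-kubeturbo

# Delete it so the operator can recreate it with the new roleRef.
kubectl delete clusterrolebinding turbo-all-binding-kubeturbo-release-turbo3
```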
If you deployed Kubeturbo without an Operator and configured it with the turbo-cluster-reader role (or any other role that you want to change after initial deployment) and you now need to switch to the elevated turbo-cluster-admin role, do the following to successfully configure Kubeturbo to use the new role: