
Turbonomic Multi-node Deployment Steps

Turbonomic uses an Operator pattern to deploy and manage your Turbonomic instance. The steps will be:

  1. Create a dedicated namespace to deploy into
  2. Review existing/Create a new storage class to use
  3. Create the Custom Resource Definition (XL)
  4. Create the resources for the Operator, and deploy the Operator pod
  5. Modify the Turbonomic Custom Resource (xl-release) that governs what is deployed
  6. Apply the Custom Resource yaml to deploy Turbonomic

CRD and Operator Setup

Obtain the yamls required to deploy Turbonomic. First decide how you will reference the yaml files, which determines your {path_to_file}. Options are:

  • Clone the GitHub project for the Turbonomic XL Operator Deployment: git clone https://github.com/IBM/t8c-operator.git
  • Switch to the branch you need. For example, for 8.0.x use: git checkout 8.0. Use the main branch for the latest GA release.

NOTE:

  • You will want to work with your own copy of the custom resource yaml: charts_v1_xl_cr.yaml. SAVE THIS FILE – it defines the CONFIGURATION OF YOUR TURBO SERVER!

Get started:

  • Create a new namespace. As an example, this will use turbonomic:
kubectl create namespace turbonomic
  • If you want to set the new turbonomic namespace as your default use the command below:
kubectl config set-context --current --namespace=turbonomic
  • Running OpenShift? You will want to get the user id range for your project, and provide that to the deployment via the custom resource. See the OpenShift Security Context section below.

Create the Custom Resource Definition (CRD)

  • Create the custom resource definition to allow the Turbo operator to deploy all the necessary resources.

NOTE:

  • Changes in the k8s API are enforced in k8s 1.22 and higher. For new deployments, use one of the following CRD yamls depending on the version of k8s you are running.
  • You do not need to change an existing CRD if you started on k8s 1.21 or older and later upgraded to 1.22.
  • This is a cluster-wide resource; creating it requires the cluster-admin role.
  • For Kubernetes version 1.22 and higher:
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/crds/charts_v1_xl_crd.yaml

or

  • ONLY NEEDED IF running Kubernetes version 1.11 up to 1.21:
kubectl create -f https://raw.githubusercontent.com/turbonomic/t8c-install/master/operator/deploy/crds/charts_v1alpha1_xl_crd.yaml

or
kubectl create -f {path_to_file}/deploy/crds/charts_v1alpha1_xl_crd.yaml
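Whichever yaml you applied, you can confirm the CRD registered by filtering the CRD list for the XL kind:

kubectl get crd | grep -i xl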

Deploy the Operator

These next steps set up the service account, RBAC resources, and deployment used to run the operator. These resources are namespaced, and you only need to be the admin of your namespace / project.

  • Create the operator service account.
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/service_account.yaml -n turbonomic

or
kubectl create -f {path_to_file}/deploy/service_account.yaml -n turbonomic

  • IF you DO NOT want to use Embedded Reporting, kubeturbo or Turbo on Turbo in the deployment - Create this Role.
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/role.yaml -n turbonomic

or
kubectl create -f {path_to_file}/deploy/role.yaml -n turbonomic

  • IF you DO want to use Embedded Reporting, kubeturbo or Turbo on Turbo in the deployment - Create this Cluster Role.
  • NOTE - for production it is recommended that you deploy kubeturbo separately, following the wiki here, as that allows you to configure and use additional kubeturbo configuration options that are not available when deploying it as part of the Turbonomic deployment
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/cluster_role.yaml

or
kubectl create -f {path_to_file}/deploy/cluster_role.yaml

  • IF you created the Role above to NOT use Embedded Reporting, kubeturbo or Turbo on Turbo - Create this Role Binding.
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/role_binding.yaml -n turbonomic

or
kubectl create -f {path_to_file}/deploy/role_binding.yaml -n turbonomic

  • IF you created the Cluster Role above to USE Embedded Reporting, kubeturbo or Turbo on Turbo - Create this Cluster Role Binding.
  • NOTE - for production it is recommended that you deploy kubeturbo separately, following the wiki here, as that allows you to configure and use additional kubeturbo configuration options that are not available when deploying it as part of the Turbonomic deployment
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/cluster_role_binding.yaml -n turbonomic

or
kubectl create -f {path_to_file}/deploy/cluster_role_binding.yaml -n turbonomic
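Before launching the operator, you can optionally confirm the namespaced RBAC objects exist (for the cluster-scoped variants, list clusterroles and clusterrolebindings instead):

kubectl get serviceaccount,role,rolebinding -n turbonomic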

  • Launch the operator pod.

Note: You should confirm the operator image tag that you want to use for your environment. The Turbonomic Operator version required by each Turbonomic Server version is reported in the Release Notes. The latest version of the operator can manage older product versions of the server.

  1. Go to Turbonomic Documentation online
  2. Select the latest Turbonomic Application Resource Management product version
  3. Select Release Notes. For the latest Release Notes, go here.
  4. In Release Notes go to "Configuration Requirements" and then "Turbonomic Updates and Operator Version"
  5. Take this value and modify the image tag value in the t8c-operator deployment yaml, as sketched below
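For illustration only, the value to change is the image: tag in the operator Deployment. The excerpt below is a hedged sketch; the container name and surrounding structure are assumptions, so match them against the actual operator.yaml:

  # excerpt (illustrative) from the Deployment in operator.yaml
  template:
    spec:
      containers:
        - name: t8c-operator   # assumed container name
          image: turbonomic/t8c-operator:{operator version from Release Notes}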
kubectl create -f https://raw.githubusercontent.com/IBM/t8c-operator/main/deploy/operator.yaml -n turbonomic

or
kubectl create -f {path_to_file}/deploy/operator.yaml -n turbonomic

  • Wait for the operator to become available (status = running with 1/1 ready). Check status using:
kubectl get pods -n turbonomic -w

or

watch kubectl get pods -n turbonomic

Continue to the next section to configure a custom resource which will launch the Turbonomic deployment.

Configure the Turbonomic Instance: The Custom Resource

These next steps deploy Turbonomic by using a custom resource in which you specify deployment configurations and anything that is not default. Turbonomic can provide a base one for you to use. This CR is namespaced, and you only need to be the admin of your namespace / project to create an instance of the Turbonomic platform.

When working with YAML files, follow the tips in this article.

  • Open deploy/crds/charts_v1_xl_cr.yaml, or a sample one provided here, in a text editor. Make the modifications required for your environment. The next section outlines options.
  • Apply the custom resource file to launch Turbonomic.
kubectl apply -f charts_v1_xl_cr.yaml -n turbonomic

NOTE: Turbonomic provides many configuration options. Common ones are listed below. For a complete list of options, refer to our custom resource definition for our operator under the validation openAPIV3Schema section in the CRD yaml here.

Table of common configuration parameters: Required

| Configuration - Required | Modification (under global unless specified) | Default |
| --- | --- | --- |
| Turbonomic Version | tag: {version to deploy} | See CR and operator yamls |
| Targets (*) | All target probes are formatted as in this example: vcenter: enabled: true | None enabled |
| Database: Remote Server | Several parameters. See article here | None. Must configure for a database server |
| Database: Containerized | Remove the externalDbIP parameter | None. Must configure for deploying a containerized DB |

(*) For list of supported target probes and parameters required to enable, see this sample CR used in OVA deployments. You may also contact Turbo Support or your Turbo representative.
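As a hedged illustration of the required settings above, a minimal custom resource could look like the sketch below. The apiVersion shown assumes the charts.helm.k8s.io/v1 group/version implied by the CRD yaml, the tag value is a placeholder, and the vcenter probe placement should be checked against the sample CR:

  apiVersion: charts.helm.k8s.io/v1
  kind: XL
  metadata:
    name: xl-release
  spec:
    global:
      repository: turbonomic
      tag: {version to deploy}
    # enable the target probes you need, formatted as in this example
    vcenter:
      enabled: true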

Table of common configuration parameters:

NOTE: Some of these parameters could be REQUIRED for your k8s cluster

| Configuration - Optional | Modification (under global unless specified) | Default |
| --- | --- | --- |
| Disable default ingress | If you want to use your own ingress, disable nginx and refer to this page. Required for OpenShift to configure nginx as a proxy and not the primary ingress. | |
| Private repo and image pull credentials | See Working with a Private Repo & Image Pull Secrets | Docker Hub "turbonomic" |
| Non-default storage class | storageClassName: {value} | Use cluster's default sc. For more information about Storage Class Requirements and security context for non-root users, if applicable, see this page |
| External DB | externalDBName: {value}. Additional parameters will be required. See Using a Database Server or Service | None - local containerized DB and PV |
| Specifying the group id for all pods | securityContext: fsGroup: {value that will work for your group id range} | Some run with 2000, 1000. See OpenShift Security Context, and also the storage class information for all other k8s deployments, including IBM's IKS |
| IAM Role support for AWS Mediation | See AWS Target IAM Role Requirements & Granular Pod Level Access for complete details and supported configurations. Leveraging a k8s cluster's configured OIDC provider and web hook method, configure a service account with this support, and specify this SA to the AWS mediation components. Optionally configure the default t8c-operator SA with this support. | IAM User |
| NGINX Service annotations (*) | ingress: annotations: service.kubernetes.io/{annotation}: "{value}". See NGINX Service Configuration Options and Using Platform Provided Ingress | None |
| Using a self-signed certificate for UI HTTPS | Relevant for the provided nginx ingress only. Refer to the online documentation here. Running in AWS with AWS Certificate Manager (ACM)? Refer to Self Signed Certs and AWS Certificate Manager | Unsigned cert |
| Enabling secure LDAP integration | Refer to the online documentation here. Configure post deployment: configure LDAP first in the UI, then update the Turbo configuration to apply the certificate. | None |
| Enable SSO integration | Refer to the online documentation here. Configure post deployment. | Not enabled |
| Enable self-monitoring: KubeTurbo | kubeturbo: enabled: true. Also, for the t8c-operator SA, use the cluster role here and cluster role binding here | Not enabled |
| Enable monitoring of other k8s clusters | See KubeTurbo Deployment Options | |
| Enable Turbo on Turbo APM example | Start with the sample CR yaml here to enable prometheus, exporters, prometurbo, and KubeTurbo. Grafana is optional. Also, for the t8c-operator SA, use the cluster role here and cluster role binding here | Not enabled |
| Reporting integration - SaaS Based | GA coming soon | Not enabled |
| Reporting integration - Embedded Reporting | Contact Turbonomic support for information to plan this setup. Designed for small to medium environments. Also, for the t8c-operator SA, use the cluster role here and cluster role binding here | Not enabled |
| Logging options | Forward logs to a syslog collector. Refer to Logging Options for details | All logging centralized to the rsyslog pod log |
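To make the yaml layout concrete, here is a hedged sketch combining a few of the optional parameters from the table above (all values in braces are placeholders):

  spec:
    global:
      storageClassName: {value}
      externalDBName: {value}
      securityContext:
        fsGroup: {value that will work for your group id range}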

OpenShift Security Context

The Turbonomic application creates PVs, and its services need access to those PVs. We will use the UID value from the sa.scc.uid-range annotation of the project. To get this value, run oc describe project yourProject. The example output below uses 1000690000.

Name:			yourProject
Created:		2 weeks ago
Labels:			<none>
Annotations:		openshift.io/description=
			openshift.io/display-name=
			openshift.io/requester=system:admin
			openshift.io/sa.scc.mcs=s0:c26,c20
			openshift.io/sa.scc.supplemental-groups=1000690000/10000
			openshift.io/sa.scc.uid-range=1000690000/10000
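If you prefer to extract just the start of the range, a jsonpath query along these lines should work (yourProject is the project name from the example above):

oc get namespace yourProject -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}' | cut -d/ -f1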

Use this value in the Turbonomic CR with securityContext: fsGroup: parameter as below:

global:
  imagePullSecret: turbocred
  repository: turbonomic
  securityContext:
    fsGroup: 1000690000
  tag: 8.31.2

Return to the setup to deploy the Operator

(*) NGINX Service Configuration Options

The default Turbonomic configuration sets up an nginx deployment in the Turbonomic namespace, creates the service “nginx” (type: LoadBalancer, externalTrafficPolicy: Local), and attempts to get a public external IP. You have several options for keeping the routing logic defined in the nginx service while retaining the flexibility to define an ingress/route to Turbonomic, or to annotate load balancer configurations.

Running OPENSHIFT? Review configuring OCP ROUTES here

Option 1: NGINX as Proxy + bring your own Ingress/Route

Starting with Turbonomic 8.3.2, you can now configure nginx as a ClusterIP type service, allowing you to use your own ingress / route, and still maintain the Turbonomic internal routing rules and leverage nginx as a proxy. This is required to leverage embedded reporting on your Turbonomic instance with your own ingress/route. You will use the parameter nginxIsPrimaryIngress set to false. Note you do not need this configuration if you are running embedded reporting with the Turbo provided nginx service as the ingress.

  apiVersion: charts.helm.k8s.io/v1
  kind: XL
  metadata:
    name: xl-release
  spec:
    nginx:
      nginxIsPrimaryIngress: false
    #use openshiftingress and nginxingress if you would like Turbo to create a single route that will point to the nginx service
    #openshiftingress:
    #  enabled: true
    #nginxingress:
    #  enabled: true

To create your own INGRESS, refer to Using Platform Provided Ingress and the INGRESS minimum requirements here
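For reference, a minimal Ingress pointing at the nginx ClusterIP service might look like the sketch below. The hostname is a placeholder and the backend port is an assumption; confirm the actual ports with kubectl get svc nginx -n turbonomic.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: turbonomic
    namespace: turbonomic
  spec:
    rules:
      - host: turbo.example.com   # placeholder hostname
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: nginx
                  port:
                    number: 443   # assumed port; confirm on the nginx service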

OPTION 2: NGINX as a LoadBalancer Service Type + customize annotations (Private IP, etc)

Concepts the user must understand: Turbonomic does not deploy an ingress controller. We deploy nginx as a k8s service, which can either create a cloud provider LB (external type) or be used as an internal ClusterIP service type behind a customer/platform provided ingress.

The user must understand these concepts themselves:

  1. The types of LBs available to you based on your cloud / infrastructure provider
  2. All the service annotations available, and what is required for your environment
  3. Turbonomic does not have an opinion on which LB you use

Bottom line: to have the nginx LB type service use an internal IP address on your load balancer, use the required annotation for your k8s platform and version. Use an annotation that is applicable for your environment.

The Turbonomic platform provides a place in the XL Custom Resource to put the annotations you have determined are required for your LB onto the nginx service:

  global: 
    ingress:
      annotations:
        #provide the correct annotations based on the LB you want and the properties you want.
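For example, on AWS the legacy in-tree annotation below requests an internal load balancer; annotation keys vary by provider and controller version, so verify against your provider's documentation:

  global:
    ingress:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"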

OPTION 2 with NGINX External Traffic Policy

In some environments where you are using the nginx service as your ingress (as LoadBalancer type), you will want to change the externalTrafficPolicy from the default of Local to a value like Cluster. Add this parameter under nginx, and remember to combine it with any other parameters you may have defined here:

 spec:
   global:
     ingress:
       annotations:
   nginx:
     externalTrafficPolicy: Cluster

Self Signed Certs and AWS Certificate Manager

If you have deployed the Turbo Server on a k8s cluster running in AWS, are using the Turbo provided NGINX as a LoadBalancer type service, AND are using AWS Certificate Manager, you can have ACM provide a certificate to the AWS LB created for the Turbo NGINX service by adding the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation, with the ACM ARN, to the service.

  1. Create an AWS Cert ARN
  2. In the Turbo CR, under ingress annotations, supply the AWS aws-load-balancer-ssl-cert annotation and the ACM ARN. This allows the LB created for our NGINX SERVICE to use this cert for TLS termination at the LB, combined with any other service annotation required for the LB type you have chosen. Follow AWS documentation for the exact syntax of the annotation. A hedged example follows this list.
  3. Apply the CR. An NGINX SERVICE will be created that references this certificate for TLS termination on the LB.
  4. If you have already deployed the CR and want to modify it, apply the CR change and delete the existing NGINX SERVICE; the operator will recreate it with the annotation for the cert.
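As an illustration (the ARN is a placeholder):

  global:
    ingress:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:{region}:{account}:certificate/{certificate-id}"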

Next Step: Proceed to Deployment Validation and First Time Setup steps.