
Server Versions and Kubeturbo Tag Mappings


Version Mapping between CWOM, IWO, Turbonomic and Kubeturbo

When deploying Kubeturbo, there are two version requirements:

  1. To allow a remote probe to connect, you must provide the first two digits of the Turbonomic Server version that Kubeturbo will connect to (for example, 8.9 for Server 8.9.5). This value goes into the configMap that Kubeturbo uses to register with the Turbonomic Server, as shown in the sketch below.
  2. The Kubeturbo version should always match your Turbonomic Server version: when you update the Turbonomic Server, also update Kubeturbo to the same version. Note that the Turbonomic Server supports N, N-1 and N+1 Kubeturbo versions, to allow time for the update. To ensure full functionality, update Kubeturbo after the Turbonomic Server has been updated.
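
For illustration, here is a minimal sketch of where that version value lands in a yaml-based deployment. The ConfigMap name, namespace, and server URL are placeholders, credential and target fields are omitted, and the field layout should be confirmed against the Deployment by Yaml page:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config        # assumed name and namespace; match your deployment
  namespace: turbo
data:
  turbo.config: |-
    {
      "communicationConfig": {
        "serverMeta": {
          "version": "8.9",
          "turboServer": "https://<your-turbonomic-server>"
        }
      }
    }
```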

Examples of Turbonomic Server and Kubeturbo version matching (**NOTE:** newly released Turbonomic Server and Kubeturbo versions always match). CWOM uses a different version numbering scheme, but it is based on the same underlying Turbonomic Server version, as shown in the example below and in the Helm sketch that follows it.

  • Turbonomic Server version = 8.9.5
  • CWOM Server version = 3.7.5
  • Kubeturbo version = 8.9.5
  • Turbonomic Server value used in configMap = 8.9
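
If you deploy by Helm chart, the same pairing can be expressed as chart values. This is a sketch only: the chart reference is a placeholder, and the value names (serverMeta.version, serverMeta.turboServer, image.tag) should be confirmed against the Deployment by Helm Chart page for your chart version.

```shell
# Sketch: image.tag carries the full Kubeturbo version (8.9.5, matching the Server),
# while serverMeta.version carries only the first two digits (8.9).
helm install kubeturbo <kubeturbo-chart> \
  --namespace turbo \
  --set serverMeta.turboServer=https://<your-turbonomic-server> \
  --set serverMeta.version=8.9 \
  --set image.tag=8.9.5
```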

Note:

  • You only need to specify the first two digits of the Turbonomic Server version in the Kubeturbo configMap.
  • Minor releases within a version (such as going from 8.9.1 to 8.9.5) do not require updates to the configMap.
  • Cisco provides IWO and CWOM 3.x.
  • While not common, a Kubeturbo version may have a dot release (8.x.y.z) to provide a hot fix; the fix will be folded into the latest release and supported going forward on the 8.x.y versions. See the sketch after this list for how to pick up such a release.
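
As an example of picking up such a dot release without touching the configMap, you can update only the Kubeturbo image tag. The deployment name, container name, namespace, and registry path below are assumptions, so adjust them to your environment:

```shell
# Roll the kubeturbo deployment to a hot-fix tag; the configMap (and its
# serverMeta.version value) stays unchanged because the first two digits do not change.
kubectl set image deployment/kubeturbo \
  kubeturbo=<your-registry>/kubeturbo:8.x.y.z \
  --namespace turbo
```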

For more information on what's new in both the Turbonomic Server and Kubeturbo, refer to the IBM Turbonomic Documentation, and see the Release Notes for details on the fixes in each version.

Turbo 6 and CWOM 2.3

Turbonomic ended support for the Turbo 6.x (CWOM 2.3.x) versions on August 31, 2021. Deploy the latest version of the Turbonomic / CWOM Server and use the corresponding Kubeturbo version.
