
GitOps Integration

Jason Shaw edited this page Mar 28, 2024 · 22 revisions

GitOps

GitOps upholds the principle that Git (or, arguably, any similar version control system) is the one and only source of truth. The desired state of the system is stored in this source of truth, and a CD pipeline ensures that these desired-state configurations are synced to the target system, which in our case is a Kubernetes or Red Hat OpenShift cluster. Under strict GitOps principles, any changes to the desired state should be made only in the source of truth, so that all changes are fully traceable commits, with history carrying committer information, commit IDs and date/time stamps.

What does it mean to have GitOps integration with Turbonomic?

Turbonomic is an analysis engine which observes the current state of the system and provides actionable insights towards its desired state, continuously driving the system to a more optimised state. These actions are changes requested to the resource specs. Turbo already has the capability to apply changes directly to resources that exist in the k8s cluster, for example updating the resource values as a result of a resize action. With GitOps, however, because a CD tool keeps the desired spec of the resources in sync with the source of truth, a change applied locally by Turbo will be immediately reverted by the CD tool. Also, if the system manages configurations (i.e. k8s resource specs) with GitOps and strictly adheres to its principles, the changes should ideally be applied back to the source of truth with a full trace of change history. This can also mean that an external approval (think Pull Requests) might be enforced by the system to complete the change.

KubeTurbo and GitOps

An alternative pattern is for the CD pipeline to enforce a loose sync of the desired state with the target system, especially for certain dynamic fields, for example the replicas of a deployment or replicaset controlled by an autoscaler (such as Turbo or the HPA). In this case, the CD pipeline should be able to take instructions to skip certain sections of the desired spec and let the system itself, another controller (HPA), or another system (Turbo) manage them. Either way, the aim of this integration is to allow Turbo to provide value to applications managed by these GitOps pipelines by still being able to apply actions and drive the system towards an optimised state.

Integration points

There are several CD implementations which provide a mechanism to build a GitOps pipeline, Argo CD and Flux being the most popular ones with Kubernetes and Red Hat OpenShift. While it is almost impossible to integrate with every combination and permutation of tools, version control systems and usage patterns, we have been able to start building support one step at a time.

For starters, we have built support for pipelines which use Argo CD as the CD tool, with GitHub and/or GitLab as the source of truth, and with Argo CD configured individually per cluster. As of now this support works only for applications managed as plain resource YAMLs; operator-based deployments are not supported at this time.

Limitations and future considerations

Currently we can discover all applications in Argo CD and show them in Turbonomic as business applications, but can only update Git sources in GitHub and/or GitLab. Supporting BitBucket as a Git source and Flux as the CD tool is under future consideration. If you would like us to add support for BitBucket or other Git sources of truth, or for Flux or other CD tools, please log an IBM Idea for Turbonomic for such an enhancement here

Argo CD integration

Argo CD manages applications via an Application custom resource (CR). One Application CR per application is created and managed in the Argo CD system namespace. Turbo leverages this Application resource by auto-discovering it and representing it as a business application in its supply chain. From the Argo CD Application, Turbo can also determine which resources it manages and represents that mapping in its supply chain.

As an example, the following Turbo supply chain shows a view scoped to a business application, which also selects the relevant managed resources:

[image: Turbonomic supply chain scoped to a business application]

The details of the Argo CD application look like the below in the Argo CD UI:

[image: Argo CD application details in the Argo CD UI]

Turbo also discovers and uses the detailed resource spec of each of the Argo CD Applications, which looks like the below in one of the Applications:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  creationTimestamp: "2023-04-11T07:20:22Z"
  generation: 113
  name: gitops-multifile
  namespace: argocd
  resourceVersion: "2248997"
  uid: 9c10f6a5-fbcc-43a8-b04a-976acd29fa6d
spec:
  destination:
    namespace: argocd-test
    server: https://kubernetes.default.svc
  project: default
  source:
    path: gitops-multifile
    repoURL: https://github.com/irfanurrehman/Turbo-ScenarioAsCode.git
    targetRevision: HEAD
  syncPolicy:
    syncOptions:
    - RespectIgnoreDifferences=true
    - PruneLast=true
    - CreateNamespace=true
    - ApplyOutOfSyncOnly=true

Turbo also uses the resources list that Argo CD populates in the status section of the Application to understand the details of the resources managed by the application. A snippet of the status.resources section of the above application is shown below:

resources:
  - health:
      status: Healthy
    kind: Service
    name: bee
    namespace: argocd-test
    status: Synced
    version: v1
  - group: apps
    health:
      status: Healthy
    kind: Deployment
    name: beekman-change-reconciler
    namespace: argocd-test
    status: Synced
    version: v1
  - group: apps
    health:
      status: Healthy
    kind: Deployment
    name: beekman-one-more
    namespace: argocd-test
    status: Synced
    version: v1

Turbo can determine the source-of-truth details, for example the repo, branch and path, from this Argo CD Application and drive the required change back to the source of truth, either as a direct commit or via a PR/MR against the resource YAML stored there.

The Configuration section below assumes that the user has Argo CD installed and active in the cluster that is also managed by Turbo. Although Turbo will be able to discover most of the details from the Argo CD Applications created to manage the resources, some details, like git credentials, will still need to be configured in the kubeturbo running in the same cluster.

Configurations

Create Developer Access Token in Git

  1. In GitHub, go to Settings > Developer settings > Personal access tokens
  2. Generate a new token and grant the following permissions: delete:packages, gist, notifications, repo, write:discussion, write:packages
  3. Copy the token, as you will need it in the next step when creating the secret

Create Secret to store Access Token

  1. Create a new secret in the same namespace that Kubeturbo is deployed in
  2. Key: token
  3. Value: the access token created above, base64-encoded
  4. Type: Opaque

Example below:

kind: Secret
apiVersion: v1
metadata:
  name: github
  namespace: turbo
data:
  token: Z2hwX0xEdkFvcUZOZkabcDEfa053cGVwWXd2SDFkS2V1MQ==
type: Opaque
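The base64 value can be produced with a shell one-liner; alternatively, `kubectl create secret generic` encodes the value for you. A minimal sketch, using a made-up token value and the `turbo` namespace from the example above:

```shell
# Base64-encode the raw token; printf '%s' avoids encoding a trailing newline
printf '%s' 'ghp_exampletoken123' | base64
# -> Z2hwX2V4YW1wbGV0b2tlbjEyMw==

# Or create the secret directly and let kubectl do the encoding
# (requires access to the cluster):
# kubectl create secret generic github -n turbo \
#   --from-literal=token='ghp_exampletoken123'
```

A stray newline encoded into the token is a common cause of authentication failures, which is why `printf '%s'` (or `echo -n`) is used instead of plain `echo`.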

Kubeturbo deployed with plain YAMLs

The options to be configured in the kubeturbo deployment, if deployed using plain YAMLs, are listed below. These configurations are used globally for all Argo CD apps deployed in that cluster:

--git-email string                           The email to be used to push changes to git.
--git-secret-name string                     The name of the secret which holds the git credentials.
--git-secret-namespace string                The namespace of the secret which holds the git credentials.
--git-username string                        The user name to be used to push changes to git.
--git-commit-mode                            The commit mode that should be used for git action executions. One of request|direct. Defaults to direct.

An example YAML elaborating the args section is listed below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeturbo-test
....
....      
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeturbo-test
  template:
    metadata:
      annotations:
        kubeturbo.io/monitored: "false"
      labels:
        app: kubeturbo-test
    spec:
      containers:
      - args:
        - --turboconfig=/etc/kubeturbo/turbo.config
        - --v=2
        - --kubelet-https=true
        - --kubelet-port=10237
        - [email protected]
        - --git-secret-name=your-github-secret-name
        - --git-secret-namespace=turbo
        - --git-username=yourusername
        - --git-commit-mode=direct
        image: turbonomic/kubeturbo:latest
        imagePullPolicy: Always
        name: kubeturbo-test
        resources: {}
...
...

Kubeturbo deployed with Operator

The options below will need to be configured in the operator CR spec. These configurations are used globally for all Argo CD apps deployed in that cluster:

apiVersion: charts.helm.k8s.io/v1
kind: Kubeturbo
metadata:
  name: kubeturbo-sample
spec:
  HANodeConfig:
    nodeRoles: '"master"'
  args:
    kubelethttps: true
    kubeletport: 10250
...
...
  # [ArgoCD integration] The email to be used to push changes to git.
  gitEmail: "[email protected]"
  # [ArgoCD integration] The username to be used to push changes to git.
  gitUsername: "yourusername"
  # [ArgoCD integration] The name of the secret which holds the git credentials.
  gitSecretName: "your-github-secret-name"
  # [ArgoCD integration] The namespace of the secret which holds the git credentials.
  gitSecretNamespace: "turbo"
  # [ArgoCD integration] The commit mode that should be used for git action executions. One of {request|direct}. Defaults to direct.
  gitCommitMode: "direct"

OPTIONAL - Fine-grained app-specific configurations

It is possible that Argo CD apps syncing from different sources of truth (for example, different GitHub repos) exist in the same cluster. It is also possible that the credential information differs between source repos. The app-specific fine-grained configuration provides a mechanism to supply different configuration information for different apps. It is achieved via a config custom resource. If using the plain old YAML mechanism, the user will first need to install the CRD from here. Once the type definition is available, a configuration can be supplied as in the sample below:

apiVersion: gitops.turbonomic.io/v1alpha1
kind: GitOps
metadata:
  name: gitops-sample
spec:
  config:
    - commitMode: direct
      credentials:
        email: [email protected]
        secretName: gitops-secret
        secretNamespace: gitops
        username: turbo
      selector: '^turbo.*$'
      whitelist:
        - app-name-1
        - app-name-2

This resource is namespaced, and a single configuration, or multiple non-conflicting configurations, can be supplied per namespace. If an app is selected by the whitelist or the selector, then the credentials and commitMode listed in the GitOps resource spec will be used instead of the global ones configured in kubeturbo.
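To illustrate how the selector pattern is evaluated (assuming standard regular-expression matching against the app name; the app names here are hypothetical), a quick check with grep:

```shell
# Which app names would the selector '^turbo.*$' match?
printf '%s\n' turbo-app-1 myapp turbo-prod other-turbo | grep -E '^turbo.*$'
# -> turbo-app-1
# -> turbo-prod
```

Only names beginning with `turbo` match, because the pattern is anchored with `^`; `other-turbo` is not selected.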

Cluster local resource update

As described in the section "What does it mean to have GitOps integration with Turbonomic" above, it might be desirable to have Turbo update the resources locally and directly in the cluster, rather than pushing the changes back into the source of truth. There can be multiple reasons for choosing this.

  • Security: Users want to minimize credential use and access to the source of truth.
  • Faster optimisation: The turnaround time for a resource update is longer if the update has to be made back in the source of truth, simply because of the number of systems involved and the possibility of failures and retries: Turbo has to push a commit or a PR to the git repo; if it is a PR, a user needs to approve it; and Argo CD, even if running in autosync mode, then has to pull and sync the resource into the cluster. Instead, the resource could simply be updated locally in the cluster.
  • Multicluster scenarios: It is possible that a unified app definition is used by multiple clusters, all of them syncing from a base definition. Optimisations from one cluster might not apply to another, and if the updates are made back to the source of truth, the change applies to all the clusters.
  • Newer or unsupported tooling in the pipeline: It is possible that Turbo does not have the capability to either understand the pipeline or directly interact with the GitOps system.

To use this pattern, the Argo CD app can be configured to skip syncing particular fields of the child resources. spec.ignoreDifferences avoids triggering an out-of-sync state when the cluster-local value changes; for details see the Argo CD documentation here. The RespectIgnoreDifferences sync option avoids copying the original values back over the cluster-local resources when a sync happens because of other field changes, as described in the Argo CD documentation here.

To utilise this pattern, both options need to be set in the Argo CD Application.
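A minimal sketch of how the two options fit together in an Application spec (the field selection and the deployment group/kind shown here are illustrative; pick the fields your autoscaler or Turbo manages):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-sample
  namespace: argocd
spec:
  # ... destination, project and source as usual ...
  ignoreDifferences:
    # Do not flag the app out-of-sync when replicas change
    # locally in the cluster (e.g. when Turbo executes an action)
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
  syncPolicy:
    syncOptions:
      # On syncs triggered by other changes, keep the ignored
      # fields at their cluster-local values
      - RespectIgnoreDifferences=true
```

Without RespectIgnoreDifferences, a sync triggered by an unrelated change would overwrite the locally updated fields with the values from the source of truth, undoing the action.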
